Android Oreo
Tested with Mini M8S II
(May also work on other GXL/S905X boxes; testing needed.)
Screenshots: Screenshot_20180310-115228.png, Screenshot_20180314-142852.png
Hi,
I'm evaluating the right backup solution for my needs. This boils down to
comparing Restic and Borg, as they represent the top-of-the-shelf
solutions currently available on Linux.
Borg: 1.1.0rc3 (from the borg-linux64 binary)
Restic: 0.7.1 (from the restic linux_amd64 binary)
The backup data consists of a live mail repository using the maildir format
and holding 139 GB (2327 dirs, 665456 files).
Keep in mind that this is the result of my own experience, for my own needs,
and is in no way thorough or exhaustive.
Note: Encryption is "repokey-blake2"
Borg, first pass:
                Original size    Compressed size    Deduplicated size
This archive:   137.47 GB        112.55 GB          102.30 GB
All archives:   137.47 GB        112.55 GB          102.30 GB
                Unique chunks    Total chunks
real 75m33.037s
user 23m22.756s
sys 3m51.228s
Borg, second pass:
                Original size    Compressed size    Deduplicated size
This archive:   137.47 GB        112.55 GB          14.57 MB
All archives:   274.94 GB        225.10 GB          102.32 GB
                Unique chunks    Total chunks
real 2m13.070s
user 1m55.448s
sys 0m12.652s
Shell# time ./restic_0.7.1_linux_amd64 backup -r /path/to/BackupTests/Restic/ /path/to/Mail/
enter password for repository:
scan [/path/to/Mail]
scanned 2327 directories, 665464 files in 0:02
[1:48:16] 100.00% 21.440 MiB/s 136.009 GiB / 136.001 GiB 667813 / 667791 items 0 errors ETA 0:00
duration: 1:48:16, 21.44MiB/s
snapshot 9abedefd saved
real 108m23.314s
user 48m2.328s
sys 6m12.984s
Shell# time ./restic_0.7.1_linux_amd64 -r /path/to/BackupTests/Restic/ backup /path/to/Mail/
enter password for repository:
using parent snapshot 9abedefd
scan [/path/to/Mail]
scanned 2327 directories, 665575 files in 0:04
[0:47] 100.00% 2.855 GiB/s 136.010 GiB / 136.010 GiB 667902 / 667902 items 0 errors ETA 0:00
duration: 0:47, 2920.94MiB/s
snapshot 6c90edf6 saved
real 0m55.859s
user 2m10.312s
sys 0m9.364s
Borg is much faster on the first pass (1h15m vs 1h48m) but significantly
slower on the second (2m13s vs 47s).
The Borg repo (103 GB) is smaller than the Restic repo (121 GB).
Shell# time ls -l BorgMount/20170911-181622/
total 0
drwxr-xr-x 1 root root 0 sept. 11 19:48 path
real 0m22.383s
user 0m0.000s
sys 0m0.000s
Shell# time ls -l ResticMount/snapshots/2017-09-11T18\:15\:18+02\:00/
total 0
drwx------ 3 mail mail 0 déc. 7 2014 Mail
real 0m0.003s
user 0m0.000s
sys 0m0.000s
Borg needs 22 seconds to build its internal directory tree; Restic is
instant.
Interesting note: for Borg, the first visible directory reproduces the full
specified backup path (/path/to/Mail), whereas Restic keeps only the last
path component "Mail" (/path/to/Mail).
Extracting a path from the mounted repositories:
Shell# time cp -a BorgMount/20170911-181622/path/to/[...]/Trash BorgRestore/
real 3m36.534s
user 0m0.396s
sys 0m7.944s
NOTE: CPU usage spiked at 100% when there was no disk activity (building
internal listings, I guess) and jumped between 31~67% during disk activity
(the actual copy process)
real 6m23.970s
user 0m0.496s
sys 0m13.708s
NOTE: CPU usage never spiked and jumped constantly between 21~53% for the
whole process
The "Trash" directory is 6.3 GB in size with 47945 files in it.
Borg restores the exact same data about 2x faster, while using about 2x
more CPU.
Fetching deep info on the mounted repositories:
Shell# time du -s --si ResticMount/snapshots/2017-09-11T18\:15\:18+02\:00/
147G ResticMount/snapshots/2017-09-11T18:15:18+02:00/
real 1m18.590s
user 0m0.800s
sys 0m4.036s
NOTE: CPU usage around 46% for the whole process
Shell# time du -s --si BorgMount/20170911-181622/
138G BorgMount/20170911-181622/
real 5m30.143s
user 0m0.864s
sys 0m4.956s
NOTE: CPU usage at 100% for the whole process
NOTE: A typical use case would be restoring a very big file from a very
nested/complex directory hierarchy, where the "extract/restore" command
would be impractical. Retrieving said file could be so time-consuming that
it would overlap with the next scheduled backup, for example.
Shell# ./borg-linux64 create --info --stats --progress /path/to/BackupTests/Borg::{now:%Y%m%d-%H%M%S} /path/to/Mail/
Failed to create/acquire the lock /path/to/BackupTests/Borg/lock (timeout).
Shell# ./restic_0.7.1_linux_amd64 -r /path/to/BackupTests/Restic backup /path/to/Mail/
enter password for repository:
using parent snapshot 6c90edf6
scan [/path/to/Mail]
scanned 2327 directories, 665655 files in 0:03
[0:38] 100.00% 3.518 GiB/s 136.039 GiB / 136.039 GiB 667982 / 667982 items 0 errors ETA 0:00
duration: 0:38, 3581.52MiB/s
snapshot 64106e49 saved
NOTE: Here Restic's design has a clear advantage. Quoting the docs: "All
files in a repository are only written once and never modified afterwards.
This allows accessing and even writing to the repository with multiple
clients in parallel".
I can live with the delays, but I really wish there were an option to
relocate the ".config" and ".cache" data. I need this because it makes it
easier to copy the data offsite without forgetting anything! I know that
".cache" is disposable, but having this data available when restoring in a
disaster-recovery scenario is a huge time saver.
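For what it's worth, Borg 1.1 documents environment variables that may help relocate this state; a minimal sketch, with hypothetical paths (BORG_CONFIG_DIR and BORG_CACHE_DIR are real Borg variables, the directory names below are made up):

```shell
#!/bin/sh
# Sketch: point Borg's local config/cache next to the repository, so an
# offsite copy of the whole BackupTests directory picks them up too.
# BORG_CONFIG_DIR / BORG_CACHE_DIR are documented Borg environment
# variables; the paths below are hypothetical.
export BORG_CONFIG_DIR=/path/to/BackupTests/Borg.config
export BORG_CACHE_DIR=/path/to/BackupTests/Borg.cache

echo "config dir: $BORG_CONFIG_DIR"
echo "cache dir:  $BORG_CACHE_DIR"
# ./borg-linux64 create ... would now keep its cache under Borg.cache
```

I have not verified whether Restic 0.7.1 offers an equivalent knob.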
Both designs are geared toward "backup-and-push-to-repository", which is
nice but not desired in my environment. I need a
"repository-pulls-backup-from-agent" design. Both tools could offer an
additional "agent" command that would:
Of course, it would be the administrator's responsibility to set everything
up accordingly, using either one repokey for every remote host, or
something a bit smarter scripted to use a repokey per host or group of
hosts, whatever suits the needs.
Why such a setup?
Because, in my case at least, the backup server is critically important
and network-isolated from the other hosts. I really don't want the
"all-hosts-can-contact-the-backup-server" style but rather the
"only-backup-server-can-contact-hosts" kind of behavior. This also helps
limit the strain on the backup server. Having all the hosts, with no
predictable backup size, hammer the backup server at the same time
(cron job) is not desirable, especially on sites with storage on a budget :-)
For instance, I currently use a very spartan/crude system, but one that is
rock solid and has never failed once in over two decades: a simple script
which, in sequence, connects via SSH to each host and uses the remote tar
command to perform the backup. SSH's piped stdout/stderr allows retrieving
the tarball as well as any errors, and acting accordingly. This is not
scalable but highly effective, battle-tested and disaster-recovery proven!
Booting a new server with some rescue OS and restoring from a tarball works
in ALL conditions, no matter how long it takes :-) But now I need
encryption and deduplication, given the huge sizes of the data to back up,
hence my tests with Borg/Restic, which both have nice features AND provide
a single-file binary for disaster scenarios.
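That pull-style script can be sketched roughly as follows; host names and paths are hypothetical, and the real ssh/tar invocation is shown as a comment while the loop only prints what it would do:

```shell
#!/bin/sh
# Pull-style backup sketch: the backup server walks a host list and pulls
# a tarball from each over SSH. Hosts and paths below are hypothetical.
HOSTS="web1 db1"
DEST=/path/to/backups
STAMP=$(date +%Y%m%d)

for h in $HOSTS; do
  # A real run would stream the tarball over stdout and errors over stderr:
  #   ssh "root@$h" 'tar czf - /etc /home' \
  #     > "$DEST/$h-$STAMP.tar.gz" 2> "$DEST/$h-$STAMP.err"
  cmd="ssh root@$h tar czf - /etc /home > $DEST/$h-$STAMP.tar.gz"
  echo "would run: $cmd"
done
```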
#To install openssh with ssh daemon
choco install openssh -params '"/SSHServerFeature"' -y
#To enable ssh keyauth
Restart Windows
#To setup ssh keys
https://github.com/PowerShell/Win32-OpenSSH/wiki/ssh.exe-examples
cd ~
ssh-keygen.exe -t rsa -f id_rsa
copy id_rsa.pub .ssh\authorized_keys
With nmap it's really simple:
-sP to do "only" a ping.
nmap -sP 192.168.0.0/24
Starting Nmap 7.40 ( https://nmap.org ) at 2018-04-19 14:09 CEST
Nmap scan report for 192.168.0.20
Host is up (0.00015s latency).
Nmap scan report for 192.168.0.40
Host is up (0.00079s latency).
Nmap scan report for 192.168.0.41
Host is up (0.00077s latency).
Nmap scan report for 192.168.0.47
Host is up (0.00071s latency).
Nmap scan report for 192.168.0.50
Host is up (0.00075s latency).
Nmap scan report for 192.168.0.254
Host is up (0.0013s latency).
Nmap done: 256 IP addresses (6 hosts up) scanned in 8.61 seconds
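For comparison, the same /24 sweep can be sketched in plain shell; this version only prints the ping commands it would run (the prefix is the one from the scan above, but treat it as a placeholder):

```shell
#!/bin/sh
# Expand 192.168.0.1-254 and print one ping per address (dry run only).
PREFIX=192.168.0
i=1
count=0
while [ $i -le 254 ]; do
  echo "ping -c1 -W1 $PREFIX.$i"
  count=$((count+1))
  i=$((i+1))
done
echo "generated $count commands"
```

nmap is of course much faster, since it probes hosts in parallel.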
Build process
To build rspamd, we recommend creating a separate build directory:
$ mkdir rspamd.build
$ cd rspamd.build
$ cmake ../rspamd
$ make
Alternatively, you can create a distribution package and use it to build your own packages. Here is an example for Debian GNU/Linux:
$ mkdir rspamd.build
$ cd rspamd.build
$ cmake ../rspamd
$ make dist
$ tar xvf rspamd-
$ cd rspamd-
$ debuild
Via... my boss...
Nonviolent communication:
how to share without squabbling...
List of Bash shortcuts
Command Editing Shortcuts
Ctrl + a – go to the start of the command line
Ctrl + e – go to the end of the command line
Ctrl + k – delete from cursor to the end of the command line
Ctrl + u – delete from cursor to the start of the command line
Ctrl + w – delete from cursor to start of word (i.e. delete backwards one word)
Ctrl + y – paste word or text that was cut using one of the deletion shortcuts (such as the one above) after the cursor
Ctrl + xx – move between start of command line and current cursor position (and back again)
Alt + b – move backward one word (or go to start of word the cursor is currently on)
Alt + f – move forward one word (or go to end of word the cursor is currently on)
Alt + d – delete to end of word starting at cursor (whole word if cursor is at the beginning of word)
Alt + c – capitalize to end of word starting at cursor (whole word if cursor is at the beginning of word)
Alt + u – make uppercase from cursor to end of word
Alt + l – make lowercase from cursor to end of word
Alt + t – swap current word with previous
Ctrl + f – move forward one character
Ctrl + b – move backward one character
Ctrl + d – delete character under the cursor
Ctrl + h – delete character before the cursor
Ctrl + t – swap character under cursor with the previous one
Command Recall Shortcuts
Ctrl + r – search the history backwards
Ctrl + g – escape from history searching mode
Ctrl + p – previous command in history (i.e. walk back through the command history)
Ctrl + n – next command in history (i.e. walk forward through the command history)
Alt + . – use the last word of the previous command
Command Control Shortcuts
Ctrl + l – clear the screen
Ctrl + s – stops the output to the screen (for long running verbose command)
Ctrl + q – allow output to the screen (if previously stopped using command above)
Ctrl + c – terminate the command
Ctrl + z – suspend/stop the command
Bash Bang (!) Commands
Bash also has some handy features that use the ! (bang) to allow you to do some funky stuff with bash commands.
!! – run last command
!blah – run the most recent command that starts with ‘blah’ (e.g. !ls)
!blah:p – print out the command that !blah would run (also adds it as the latest command in the command history)
!$ – the last word of the previous command (same as Alt + .)
!$:p – print out the word that !$ would substitute
!* – the previous command except for the last word (e.g. if you type ‘find some_file.txt /‘, then !* would give you ‘find some_file.txt‘)
!*:p – print out what !* would substitute
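Bang expansion is an interactive feature, but you can experiment with it from a script by switching history on manually; a small sketch (bash-specific):

```shell
#!/bin/bash
# History expansion is off in non-interactive shells; enable it so the
# bang commands above can be tested in a script.
set -o history -o histexpand
echo hello
out=$(echo !!)   # !! expands to the previous command: "echo hello"
echo "$out"
```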
Via Shaarli
cd /boot
cp start.elf start.elf_backup && \
perl -pne 's/\x47\xE9\x36\x32\x48\x3C\x18/\x47\xE9\x36\x32\x48\x3C\x1F/g' < start.elf_backup > start.elf
Thanks ;)
The simple answer is that web servers should never be run as root for well-known security reasons, and this goes for npm commands as well.
To start fresh, remove prior Node.js and npm installs as well as these files/directories:
mv ~/.npmrc ~/.npmrc~prior
mv ~/.npm ~/.npm~prior
mv ~/tmp ~/tmp.~prior
mv ~/.npm-init.js ~/.npm-init.js~prior
Solution: Install Node.js (which comes with npm) as NON root (no sudo)
Download Source Code directly from https://nodejs.org/en/download/
Execute the below as yourself (Linux/OS X)
cd node-v8.1.2 # into expanded source dir
export NODE_PARENT=${HOME}/node-v8.1.2 # put this into your ~/.bashrc
Feel free to change the above export to whatever location is appropriate
./configure --prefix=${NODE_PARENT}
make -j4 # for dual core ... use -j8 for quad core CPU
make install
This puts the binaries for Node.js and npm, as well as its module repository, into $NODE_PARENT, a $USER-owned directory, which then allows you to issue subsequent npm install xxx commands as yourself.
To reach the node and npm binaries, alter your PATH environment variable in your ~/.bashrc:
export PATH=${NODE_PARENT}/bin:${PATH}
export NODE_PATH=${NODE_PARENT}/lib/node_modules
Then, to install packages into that directory (global) as opposed to the current directory (local), always pass the -g (global) flag:
npm install -g someModule
NOTE: at no time are you executing anything npm- or node-related as root/sudo.
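The PATH mechanism this relies on can be sandbox-tested with a throwaway prefix; everything below is hypothetical (a fake node script stands in for the real binary that `make install` would place there):

```shell
#!/bin/sh
# Simulate the user-owned install prefix to see how PATH resolution works.
NODE_PARENT=$(mktemp -d)/node-v8.1.2
mkdir -p "$NODE_PARENT/bin" "$NODE_PARENT/lib/node_modules"

# Stand-in for the node binary that `make install` would put there.
printf '#!/bin/sh\necho fake-node\n' > "$NODE_PARENT/bin/node"
chmod +x "$NODE_PARENT/bin/node"

# Same exports as the ~/.bashrc lines above.
export PATH="$NODE_PARENT/bin:$PATH"
export NODE_PATH="$NODE_PARENT/lib/node_modules"

resolved=$(command -v node)   # resolves inside $NODE_PARENT, no root needed
node_out=$(node)
echo "node resolves to: $resolved"
```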
Water level detection problem. The appliance fills, but the water level is not detected. Check the compression chamber and the pressure switch. To identify the parts, type this in the address bar: (dishwasher pressure switch) (dishwasher compression chamber).
It's A5, but does the pressure switch need replacing?
No. Blow into it: remove the small hose fitted on top, then blow into the pressure switch itself. You should hear a click; if so, it is fine. Then blow into the small hose, which may be blocked, pierced, or cracked. Follow it: it leads directly into the compression chamber. Clean it if needed.
A free alternative to OneNote, syncable with NextCloud...
To be tested.
For those looking for a motherboard for a specific computer model
from the big brands: Acer, HP, Dell, etc.
Sharing data
var Dat = require('dat-js')
var dat = Dat()
dat.add(function (repo) {
  var writer = repo.archive.createFileWriteStream('hello.txt')
  writer.write('world')
  writer.end(function () { replicate(repo.key) }) // replicate() is defined elsewhere in the dat-js example
})
Downloading data
var Dat = require('dat-js')
var clone = Dat()
clone.add(key, function (repo) { // key comes from the sharing side
  repo.archive.readFile('hello.txt', function (err, data) {
    console.log(data.toString()) // prints 'world'
  })
})
matchesRule(rule) : Function. Returns true if the rule matches with the time. See time trigger rules for rule examples.
return {
	on = {
		timer = {
			function(domoticz)
				-- use domoticz.variables(..) here and return true when the timer should go off
			end
		}
	},
	execute = function(domoticz, timer, triggerInfo)
	end
}
Thanks for this link! (via https://shaar.libox.fr/?oEo7eA)
It's really easy to learn the different tools with this page ;)
Setting Up New Permissions
The new permissions structure is simple. Instead of creating a new permission GRANT for every database, we are just going to GRANT the user basic access to the namespace. MySQL has two wildcard characters: _ and %. If you need them interpreted literally, just escape them with \. The following command grants the MySQL user the necessary permissions to access and update all the databases in the phabricator namespace.
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE, SHOW VIEW ON `phabricator\_%`.* TO 'phabric'@'localhost';
Great, now whenever a new Phabricator upgrade comes along that changes the database structure, there shouldn't be much that needs to be done in terms of granting and deleting permissions.
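To see which database names the escaped pattern actually covers, here's a rough shell analogue: MySQL's escaped `\_` behaves like a literal underscore, so the glob `phabricator_*` models `phabricator\_%` (the database names below are made up):

```shell
#!/bin/sh
# Rough analogue of MySQL's `phabricator\_%`: `\_` is a literal
# underscore and `%` is any run of characters, which the shell glob
# `phabricator_*` mimics.
covered() {
  case "$1" in
    phabricator_*) echo "$1: covered" ;;
    *)             echo "$1: NOT covered" ;;
  esac
}
a=$(covered phabricator_user)
b=$(covered phabricator_repository)
c=$(covered phabricatorXuser)   # an unescaped `_` wildcard would match this; `\_` does not
echo "$a"; echo "$b"; echo "$c"
```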