Virtual Machine Manager (virt-manager) doesn’t automatically start your virtual networks, which leads to the following error when starting a virtual machine:
Error starting domain: Requested operation is not valid: network 'default' is not active
To solve this error, in Virtual Machine Manager go to Edit->Connection Details->Virtual Networks, select the required network (‘default’ in our case) and press the Start Network button (the one with a play icon). You can avoid having to go through this process again by ticking the Autostart checkbox, which makes the network start automatically at boot.
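The same can also be done from the command line with virsh (a sketch; it assumes the usual qemu:///system connection and the network name ‘default’):

```shell
$ sudo virsh net-start default
$ sudo virsh net-autostart default
```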
After a recent upgrade to Debian Stretch, my OpenVPN server stopped working. Checking the relevant log file, /var/log/openvpn.log, I found the following not-very-helpful error message:
Fri Nov 23 16:46:37 2018 OpenVPN 2.4.0 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Jul 18 2017
Fri Nov 23 16:46:37 2018 library versions: OpenSSL 1.0.2l 25 May 2017, LZO 2.08
Fri Nov 23 16:46:37 2018 daemon() failed or unsupported: Resource temporarily unavailable (errno=11)
Fri Nov 23 16:46:37 2018 Exiting due to fatal error
Fortunately, someone already reported this error to Debian and found out that it is caused by the LimitNPROC directive in the OpenVPN systemd service file. LimitNPROC is used to limit the number of processes (forks) a service is allowed to create. Removing this limit entirely might not be the best idea, so instead of commenting it out, I suggest finding the minimal value that allows OpenVPN to start and work properly. After some testing, I found a minimal value that worked for me.
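An alternative to editing the packaged unit file directly is a systemd drop-in override, which survives package upgrades (a sketch; the LimitNPROC value below is a placeholder, not the value from my setup):

```ini
# Created via: sudo systemctl edit openvpn@.service
# which writes /etc/systemd/system/openvpn@.service.d/override.conf
[Service]
LimitNPROC=10
```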
After editing /lib/systemd/system/openvpn@.service, you need to reload the systemd unit files and restart the OpenVPN server process:
$ sudo systemctl daemon-reload
$ sudo systemctl restart openvpn@server
Sharing data between the guest and the host system is necessary in many scenarios. If the guest is a Linux system, you can simply add a shared folder that will be automatically mounted. However, this does not work if the guest is Windows. Sometimes you can work around it by using Samba shares, but in some scenarios the network configuration makes that difficult. For example, when using usermode networking, the host machine can’t easily communicate with the guest over the network.
However, there is another way to share folders in virt-manager that actually works for Windows guests – SPICE. The first step is to configure the sharing in virt-manager. In the VM details view, click on “Add Hardware” and select a “Channel” device. Set the new device name to org.spice-space.webdav.0 and leave the other fields as-is.
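Under the hood, this adds a channel device to the domain XML, roughly like the following (a sketch; you can inspect your own with virsh dumpxml):

```xml
<channel type='spiceport'>
  <source channel='org.spice-space.webdav.0'/>
  <target type='virtio' name='org.spice-space.webdav.0'/>
</channel>
```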
Now start the guest machine and install spice-webdav on it. After installing spice-webdav, make sure the “Spice webdav proxy” service is actually running. Running C:\Program Files\SPICE webdavd\map-drive.bat will map the shared folder, which is by default ~/Public on the host. If you encounter the following error:
System error 67 has occurred.
The network name cannot be found.

it means that the Spice webdav proxy service is not running.
If you want to change the shared folder, you will have to use virt-viewer instead of virt-manager, and configure it under File->Preferences.
You can use the xdg-mime utility to query the default mime-type associations and change them.
xdg-mime query default video/mp4
will return the .desktop file associated with the default app for opening mp4 files. To change the default association:
xdg-mime default vlc.desktop video/mp4
you need to specify the .desktop file used to open files of the given mime type. To check the mime-type of a given file, use:
file -ib filename
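For example (the file name and content here are made up for illustration):

```shell
printf 'hello\n' > /tmp/example.txt
file -ib /tmp/example.txt   # prints something like: text/plain; charset=us-ascii
```

The mime-type it reports is exactly what you would pass to xdg-mime query default.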
Duply is a convenient wrapper around duplicity, a tool for encrypted incremental backups I’ve used for the last couple of years. After a recent upgrade, my Amazon S3 backups failed, reporting the following error:
'Check your credentials' % (len(names), str(names)))
NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials
Boto, the backend duplicity relies on for Amazon S3, requires authentication parameters to be passed through the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. As different backends require different variables, duply used to make that transparent: one would just set TARGET_PASS and duply would take care of the rest. However, duply 1.10 broke compatibility and requires you to set the variables yourself. Hence, the fix is to replace the TARGET_* variables with the exported AWS_* environment variables.
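Concretely, the change in the profile’s conf file looks roughly like this (the path and key values are placeholders):

```shell
# ~/.duply/<profile>/conf
# Old style (pre-1.10), no longer handled for S3:
#TARGET_USER='<access-key-id>'
#TARGET_PASS='<secret-access-key>'

# New style: export the variables boto reads directly.
export AWS_ACCESS_KEY_ID='<access-key-id>'
export AWS_SECRET_ACCESS_KEY='<secret-access-key>'
```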
Google Drive lacks a very basic feature: calculating folder size. The web interface offers no way to view the total size of a given directory. There are a couple of dubious-looking online “folder size analyzers” that request access permissions to your Google Drive and offer you the basic functionality of calculating folder size. While those apps have a legitimate need for access permissions to your account, you may consider granting such permissions to random apps a questionable decision.
The (very) useful tool rclone, which provides an rsync-like interface to many cloud storage providers, implements this functionality. After configuring rclone to work with your Google Drive, use the size command to determine the size of a given folder:
$ rclone size "gdrive:Pictures/"
Total objects: 1421
Total size: 9.780 GBytes (10501374440 Bytes)
The size is calculated recursively. However, there is no simple way to display the size of each sub-directory recursively.
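As a workaround, you can loop over the top-level sub-directories with rclone lsd and run size on each one (a sketch; it assumes a remote named gdrive and breaks on folder names containing spaces, since it takes only the last field of lsd’s output):

```shell
rclone lsd "gdrive:Pictures/" | awk '{print $NF}' | while read -r dir; do
    echo "=== $dir ==="
    rclone size "gdrive:Pictures/$dir"
done
```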
I use rsync to sync files from my computer to a FAT-formatted SD card. Passing the --update flag makes rsync “skip files that are newer on the receiver”. It seems it should work as follows: after the first sync, any subsequent sync will only transfer the files that changed in the meantime. However, I noticed that transfer times were usually longer than expected, which led me to think things were not working as they should.
Since --update relies only on modification times, obviously something was wrong with them. After ruling out several other possibilities, I guessed that the modification times of some files were getting mangled. A quick check revealed that FAT can only store modification times with 2-second granularity. I had also used rsync’s --archive flag, which among other things attempts to preserve the modification times of the files it copies. But what about FAT’s coarser granularity? That was apparently the culprit: some files, when copied, got a modification time up to 1 second earlier than the original. Hence, on every sync, from rsync’s perspective the target was older than the source and needed to be overwritten.
This can be demonstrated by the following session:
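The following sketch reproduces the effect without a real FAT mount, by faking a copy whose timestamp is 1 second older than the source (all paths and timestamps are made up; the --modify-window option comes from rsync’s man page, where it is described as intended precisely for FAT’s 2-second granularity):

```shell
# Set up a "source" and a "card" copy whose mtime is 1 second older.
mkdir -p /tmp/src /tmp/card
echo data > /tmp/src/file.txt
cp /tmp/src/file.txt /tmp/card/file.txt
touch -d '2018-11-23 12:00:01' /tmp/src/file.txt
touch -d '2018-11-23 12:00:00' /tmp/card/file.txt

# The target looks older, so --update still wants to transfer the file:
rsync -ai --update --dry-run /tmp/src/ /tmp/card/

# With --modify-window=1, timestamps within 1 second compare as equal,
# and the unchanged file is skipped:
rsync -ai --update --modify-window=1 --dry-run /tmp/src/ /tmp/card/
```

Adding --modify-window=1 to the sync command made the repeated transfers go away.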
Passing ignorenonframetext as an option to Beamer causes it to ignore all text that is not inside a frame. This is useful when you want to add content for the article version of the presentation (or simply script lines for yourself) that should not show up in the regular presentation. However, LyX puts the title elements outside any frame. Therefore, if you use ignorenonframetext, you end up missing the title frame. The solution is to manually wrap the title block (the title, author, institute, etc.) in a frame and append \maketitle to it. This will cause the title frame to be rendered correctly.
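In plain LaTeX terms, the result looks roughly like this (a sketch; the title and author are placeholders):

```latex
\documentclass[ignorenonframetext]{beamer}
\title{Example Talk}    % placeholder
\author{Example Author} % placeholder
\begin{document}

% The title block is wrapped in a frame so ignorenonframetext keeps it.
\begin{frame}
  \maketitle
\end{frame}

\begin{frame}{First slide}
  Regular frame content.
\end{frame}

This paragraph is outside any frame, so with ignorenonframetext
it appears only in the article version of the document.

\end{document}
```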