<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Go away, the cloud is full]]></title><description><![CDATA[ps aux | more]]></description><link>http://www.sevangelatos.com/</link><image><url>http://www.sevangelatos.com/favicon.png</url><title>Go away, the cloud is full</title><link>http://www.sevangelatos.com/</link></image><generator>Ghost 1.26</generator><lastBuildDate>Wed, 24 Dec 2025 17:22:57 GMT</lastBuildDate><atom:link href="http://www.sevangelatos.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Using ffmpeg to convert audio cassettes to videos]]></title><description><![CDATA[<div class="kg-card-markdown"><p>The excellent ffmpeg is the swiss army knife of all video processing. Just as a reminder to myself, here's a command line to convert an image and audio file to a video.</p>
<pre><code>ffmpeg -y -loop 1 -framerate 1  -i image.jpg  -i Unknown.mp3 \
    -c:v libx264 -preset medium -tune</code></pre></div>]]></description><link>http://www.sevangelatos.com/using-ffmpeg-to-convert-audio-cassettes-to-videos/</link><guid isPermaLink="false">5ee0d447fedb5d0001066be4</guid><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Wed, 10 Jun 2020 13:04:20 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1585993710444-aad7a0016901?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://images.unsplash.com/photo-1585993710444-aad7a0016901?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Using ffmpeg to convert audio cassettes to videos"><p>The excellent ffmpeg is the swiss army knife of all video processing. Just as a reminder to myself, here's a command line to convert an image and audio file to a video.</p>
<pre><code>ffmpeg -y -loop 1 -framerate 1  -i image.jpg  -i Unknown.mp3 \
    -c:v libx264 -preset medium -tune stillimage -crf 23 \
    -vf scale=-1:720 -c:a copy -shortest -pix_fmt yuv420p \
    -movflags +faststart output.mp4
</code></pre>
<p>The options are as follows:</p>
<ul>
<li><strong>-y</strong> Yes, go ahead and overwrite output file. Useful when experimenting.</li>
<li><strong>-loop 1</strong> Loop the image input (the 1 enables looping). Makes the single image last forever as a stream.</li>
<li><strong>-framerate 1</strong> Set the video frame rate to 1 FPS. This saves space, but going below 1 FPS causes compatibility issues with some players.</li>
<li><strong>-c:v libx264</strong> Encode video as h.264 using the excellent x264 library. Good quality, great compatibility.</li>
<li><strong>-preset medium</strong> Use the medium speed/compression trade-off for the video encoder. Other options: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow. No need to go overboard for still images.</li>
<li><strong>-tune stillimage</strong> Tune the video encoder for a still image.</li>
<li><strong>-crf 23</strong> Video quality/bitrate. Values of 18 - 25 are reasonable. Lower values give higher quality and bitrates.</li>
<li><strong>-vf scale=-1:720</strong> Scale the video to a height of 720 pixels, preserving the aspect ratio. Adjust to your needs.</li>
<li><strong>-c:a copy</strong> Copy the audio stream without re-encoding it.</li>
<li><strong>-shortest</strong> Finish encoding when the shortest input stream ends. Since we made the image stream infinite with -loop 1, the audio stream is the shortest.</li>
<li><strong>-pix_fmt yuv420p</strong> Use yuv pixel format for better compatibility.</li>
<li><strong>-movflags +faststart</strong> Arrange all the mp4 headers and stuff at the beginning of the file so that it can be streamed efficiently.</li>
</ul>
<p>Optionally you may use <code>-c:a aac -b:a 192k</code> instead of <code>-c:a copy</code> to re-encode audio as AAC, which is more common in the mp4 container.</p>
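<p>For reference, here is what the full command looks like with the AAC variant. It is wrapped in a small shell function (a sketch; the function name and filenames are made up) so the flag soup only has to live in one place:</p>

```shell
# Same command as above, but re-encoding audio as AAC instead of stream-copying.
# make_video is a hypothetical helper: make_video <image> <audio> <output>
make_video() {
    ffmpeg -y -loop 1 -framerate 1 -i "$1" -i "$2" \
        -c:v libx264 -preset medium -tune stillimage -crf 23 \
        -vf scale=-1:720 -c:a aac -b:a 192k -shortest -pix_fmt yuv420p \
        -movflags +faststart "$3"
}
# e.g. make_video cover.jpg side_a.mp3 side_a.mp4
```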
<p>So there you go...</p>
 <iframe width="560" height="315" src="https://www.youtube.com/embed/co0zprALGXY" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></div>]]></content:encoded></item><item><title><![CDATA[Yet another rsync based backup script]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Just putting this here for easy reference, or to help any random passers by. I know there are tools that are supposed to automate this like <a href="https://rsnapshot.org/">rsnapshot</a> but I was disappointed to see that rsnapshot is probably <a href="https://github.com/rsnapshot/rsnapshot/issues/191">unmaintained</a></p>
<p>On top of that, this looked like something easy enough to roll</p></div>]]></description><link>http://www.sevangelatos.com/yet-another-rsync-based-backup-script/</link><guid isPermaLink="false">5cc84bb9e6e4e80001a6b61f</guid><category><![CDATA[Linux]]></category><category><![CDATA[Hard Disk]]></category><category><![CDATA[Deduplication]]></category><category><![CDATA[backup]]></category><category><![CDATA[bash]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Tue, 30 Apr 2019 13:36:15 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1523274620588-4c03146581a1?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://images.unsplash.com/photo-1523274620588-4c03146581a1?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Yet another rsync based backup script"><p>Just putting this here for easy reference, or to help any random passers by. I know there are tools that are supposed to automate this like <a href="https://rsnapshot.org/">rsnapshot</a> but I was disappointed to see that rsnapshot is probably <a href="https://github.com/rsnapshot/rsnapshot/issues/191">unmaintained</a></p>
<p>On top of that, this looked like something easy enough to roll on my own and adapt to my needs. So there you go...</p>
<pre><code class="language-sh">#!/bin/bash
# A script to perform daily/weekly/monthly backups using rsync

SOURCE=&quot;user@remote:/path/some_files&quot;
DESTINATION=&quot;/backups/my_backups&quot;

# How many backups of each type shall we keep?
DAILY_COUNT=7
WEEKLY_COUNT=4
MONTHLY_COUNT=18

DAILY=$(date  &quot;+daily_%Y_%m_%d&quot;)
WEEKLY=$(date  &quot;+weekly_%Y_%W&quot;)
MONTHLY=$(date  &quot;+monthly_%Y_%m&quot;)
ARCHIVE=&quot;archive&quot;
LATEST=&quot;latest&quot;

set -e

# Make sure destination dirs exist
mkdir -p &quot;${DESTINATION}/${ARCHIVE}&quot;
mkdir -p &quot;${DESTINATION}/${LATEST}&quot;

cd &quot;${DESTINATION}&quot;

ABS_LATEST=&quot;$(pwd)/${LATEST}&quot;
# Do the rsync 
rsync -a --quiet --delete --link-dest=&quot;${ABS_LATEST}&quot; -- &quot;${SOURCE}&quot; &quot;${ARCHIVE}/${DAILY}&quot;

# Replace the last latest
/bin/rm -rf -- &quot;${ABS_LATEST}&quot;
/bin/cp -al -- &quot;${ARCHIVE}/${DAILY}&quot; &quot;${ABS_LATEST}&quot;

# Add new weekly if needed
if [ ! -d &quot;${ARCHIVE}/${WEEKLY}&quot; ]; then
    /bin/cp -al -- &quot;${ABS_LATEST}&quot; &quot;${ARCHIVE}/${WEEKLY}&quot;
fi

# Add new monthly if needed
if [ ! -d &quot;${ARCHIVE}/${MONTHLY}&quot; ]; then
    /bin/cp -al -- &quot;${ABS_LATEST}&quot; &quot;${ARCHIVE}/${MONTHLY}&quot;
fi

# Cull archives
cd &quot;${ARCHIVE}&quot;

# Daily culling
/usr/bin/find . -maxdepth 1 -type d -regextype posix-egrep \
    -regex &quot;.*\/daily_[0-9]{4}_[0-9]{2}_[0-9]{2}&quot; -printf &quot;%f\n&quot; |
    sort -r |
    tail -n +$((${DAILY_COUNT} + 1)) |
    while read -r EXPIRED; do
        #echo &quot;Removing ${EXPIRED}&quot;
        /bin/rm -rf -- &quot;$EXPIRED&quot;
    done

# Weekly culling
/usr/bin/find . -maxdepth 1 -type d -regextype posix-egrep \
    -regex &quot;.*\/weekly_[0-9]{4}_[0-9]{2}&quot; -printf &quot;%f\n&quot; |
    sort -r |
    tail -n +$((${WEEKLY_COUNT} + 1)) |
    while read -r EXPIRED; do
        #echo &quot;Removing ${EXPIRED}&quot;
        /bin/rm -rf -- &quot;$EXPIRED&quot;
    done

# Monthly culling
/usr/bin/find . -maxdepth 1 -type d -regextype posix-egrep \
    -regex &quot;.*\/monthly_[0-9]{4}_[0-9]{2}&quot; -printf &quot;%f\n&quot; |
    sort -r |
    tail -n +$((${MONTHLY_COUNT} + 1)) |
    while read -r EXPIRED; do
        #echo &quot;Removing ${EXPIRED}&quot;
        /bin/rm -rf -- &quot;$EXPIRED&quot;
    done
</code></pre>
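<p>What makes this scheme cheap is that <code>cp -al</code> and rsync's <code>--link-dest</code> create hard links instead of copying data, so unchanged files are stored only once no matter how many snapshots reference them. A quick sketch of the effect, on throwaway files:</p>

```shell
# Hard-linked snapshots share inodes, so extra copies cost almost no disk space.
tmp=$(mktemp -d)
mkdir "$tmp/day1"
echo "payload" > "$tmp/day1/file.txt"

# Snapshot the directory with hard links, as the script does for weekly/monthly
cp -al "$tmp/day1" "$tmp/day2"

# Both directory entries now point at the same inode
links=$(stat -c %h "$tmp/day2/file.txt")
echo "link count: $links"   # prints: link count: 2

# Removing one snapshot does not affect the other
rm -rf "$tmp/day1"
cat "$tmp/day2/file.txt"    # prints: payload
```

<p>One caveat: hard links share content, so a tool that modifies files in place would silently alter every snapshot. rsync is safe here because it writes changed files to a new inode and renames it into place.</p>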
</div>]]></content:encoded></item><item><title><![CDATA[Setting up owncloud on ubuntu 18.04]]></title><description><![CDATA[<div class="kg-card-markdown"><p>In this article I will walk you through installing a bare metal owncloud server based on ubuntu server 18.04. Throughout this guide we will assume that our server is named myowncloud.mydomain.com.</p>
<h2 id="basesysteminstallation">Base system installation</h2>
<p>First steps as usual, we download the ubuntu <a href="http://cdimage.ubuntu.com/releases/18.04.2/release/">server install image</a>. If you</p></div>]]></description><link>http://www.sevangelatos.com/setting-up-owncloud-on-ubuntu-18-04/</link><guid isPermaLink="false">5c996c4de6e4e80001a6b600</guid><category><![CDATA[Linux]]></category><category><![CDATA[owncloud]]></category><category><![CDATA[ubuntu]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Sun, 14 Apr 2019 17:53:43 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1495757450029-09dbedacbc36?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://images.unsplash.com/photo-1495757450029-09dbedacbc36?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Setting up owncloud on ubuntu 18.04"><p>In this article I will walk you through installing a bare metal owncloud server based on ubuntu server 18.04. Throughout this guide we will assume that our server is named myowncloud.mydomain.com.</p>
<h2 id="basesysteminstallation">Base system installation</h2>
<p>First steps as usual, we download the ubuntu <a href="http://cdimage.ubuntu.com/releases/18.04.2/release/">server install image</a>. If you wish to use full disk encryption, please use the &quot;alternate installer&quot; ubuntu-18.04.x-server-amd64.iso images instead of the &quot;standard&quot; ubuntu-18.04.x-live-server-amd64.iso images that do not currently support advanced LVM options. Next, &quot;burn&quot; the iso on a USB stick and boot it.</p>
<p>Now, if you wish to use full disk encryption, in the &quot;Partition disks&quot; step select: &quot;Guided - use entire disk and set up encrypted LVM&quot;.</p>
<p>During installation I also chose the &quot;Install security updates automatically&quot; option. As for the software selection (tasksel), I only picked the &quot;OpenSSH server&quot; option as I wished to install everything else manually.</p>
<h2 id="systempreparation">System preparation</h2>
<p>After installation is complete, boot the new system and log on to it.<br>
Let's first bring everything up to date and install a firewall.</p>
<pre><code class="language-bash">apt update
apt upgrade -y
apt dist-upgrade -y

#firewall 
ufw allow OpenSSH
ufw enable
</code></pre>
<p>If you did not set up unattended package upgrades during installation, you may do so now.</p>
<pre><code class="language-bash">apt install unattended-upgrades
dpkg-reconfigure -p low unattended-upgrades 
</code></pre>
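<p>If memory serves, the reconfiguration step simply writes settings like the following to <code>/etc/apt/apt.conf.d/20auto-upgrades</code> (treat the exact path as an assumption for your release), so you can also inspect or edit that file directly:</p>

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```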
<h2 id="owncloudprerequisites">OwnCloud prerequisites</h2>
<p>Next we will install the owncloud prerequisites, in addition to some nice-to-have packages like vim and tmux. We will also update our firewall to allow ports 80 and 443 through to apache.</p>
<pre><code class="language-bash">apt-get install \
apache2 \
curl \
mysql-server \
php libapache2-mod-php php-mysql \
certbot python-certbot-apache \
vim tmux wget unattended-upgrades \
php-bz2 php-curl php-gd php-imagick php-intl php-mbstring \
php-xml php-zip php-redis redis

ufw app list
ufw app info &quot;Apache Full&quot;
ufw allow &quot;Apache Full&quot;
</code></pre>
<h2 id="installdyndnsupdatescript">Install dyndns update script</h2>
<p>In my case, I needed to set up dynamic DNS for my server. To do that I chose the excellent <a href="http://freedns.afraid.org/">FreeDNS</a> service, which allows you to update your IP using a simple http request done through wget. Go <a href="http://freedns.afraid.org/dynamic/">here</a> to get the command for your address.</p>
<p>So we now edit the crontab to add the updates:</p>
<pre><code class="language-bash">crontab -e
# Add the following lines:
#@hourly sleep 12 ; wget -O - http://freedns.afraid.org/dynamic/update.php?SOME_HASH_HERE &gt;&gt; /tmp/freedns_owncloud.log 2&gt;&amp;1 &amp;
#@reboot sleep 12 ; wget -O - http://freedns.afraid.org/dynamic/update.php?SOME_HASH_HERE &gt;&gt; /tmp/freedns_owncloud.log 2&gt;&amp;1 &amp;
</code></pre>
<h2 id="runcertbottogetsslcertificates">Run certbot to get SSL certificates</h2>
<p>Now that we have our DNS up and running, we need to use certbot to get SSL certificates for our server. This is crucial, since using owncloud over the internet without SSL encryption would allow anyone to read our login information and exchanged files. Not a good idea, obviously. At this point, to be able to run certbot, you must configure your router to <strong>allow accessing ports 80 and 443</strong> of your server from the internet. This is more or less what you should see during the process:</p>
<pre><code class="language-bash">certbot
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel): ****REMOVED****@mailserver.com

-------------------------------------------------------------------------------
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v01.api.letsencrypt.org/directory
-------------------------------------------------------------------------------
(A)gree/(C)ancel: A
No names were found in your configuration files. Please enter in your domain
name(s) (comma and/or space separated)  (Enter 'c' to cancel): myowncloud.mydomain.com
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for myowncloud.mydomain.com
Enabled Apache rewrite module
Waiting for verification...
Cleaning up challenges
Created an SSL vhost at /etc/apache2/sites-available/000-default-le-ssl.conf
Enabled Apache socache_shmcb module
Enabled Apache ssl module
Deploying Certificate to VirtualHost /etc/apache2/sites-available/000-default-le-ssl.conf
Enabling available site: /etc/apache2/sites-available/000-default-le-ssl.conf

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Enabled Apache rewrite module
Redirecting vhost in /etc/apache2/sites-enabled/000-default.conf to ssl vhost in /etc/apache2/sites-available/000-default-le-ssl.conf
...
</code></pre>
<p>Now that we got our SSL certificates, we need to configure cron to renew them periodically, as these certificates expire every 3 months. We also need to reload apache after the certificates have been renewed.</p>
<pre><code class="language-bash">crontab -e
@daily /usr/bin/certbot renew -n --post-hook &quot;systemctl reload apache2&quot;
</code></pre>
<h2 id="securemysqlserver">Secure mysql server</h2>
<p>Run:</p>
<pre><code class="language-bash">mysql_secure_installation
</code></pre>
<p>Follow the prompts to set a secure mysql root password, remove the test databases/users and disable remote root logins.</p>
<h2 id="getlatestversionofowncloud">Get latest version of owncloud</h2>
<p>Go to the <a href="https://owncloud.org/download/#owncloud-server-tar-ball">owncloud downloads</a> page and get the URL of the latest owncloud tarball.</p>
<pre><code class="language-bash">cd /var/www 
wget  https://download.owncloud.org/community/owncloud-10.1.1.tar.bz2 
tar xjf owncloud-10.1.1.tar.bz2
# Change ownership of owncloud files
find /var/www/owncloud \( \! -user www-data -o \! -group root \) -print0 | xargs -r -0 chown www-data:root &amp;&amp; \
  chmod g+w /var/www/owncloud

# Make data directory
mkdir /mnt/owncloud_data
chown www-data:root /mnt/owncloud_data &amp;&amp;   chmod g+w /mnt/owncloud_data 
</code></pre>
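<p>Before unpacking, it is worth verifying the download. The owncloud download server publishes checksum files alongside the tarballs (I have not checked every release, so treat the exact <code>.sha256</code> URL as an assumption); the general pattern, demonstrated on a throwaway file, is:</p>

```shell
# Generic checksum check, demonstrated on a stand-in file; in practice you
# would fetch owncloud-10.1.1.tar.bz2.sha256 from the same download directory.
echo "stand-in for the real tarball" > demo.tar.bz2
sha256sum demo.tar.bz2 > demo.tar.bz2.sha256

# Prints "demo.tar.bz2: OK" and exits 0 on success, fails on any mismatch
sha256sum -c demo.tar.bz2.sha256
```

<p>For the real thing you would <code>wget</code> the published checksum file into the same directory and run <code>sha256sum -c</code> against it.</p>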
<h2 id="configureapache">Configure apache</h2>
<p>First configure the default servername for apache.</p>
<pre><code># Add this line near the end of the file: 
# ServerName myowncloud.mydomain.com
vim /etc/apache2/apache2.conf
</code></pre>
<p>Now edit /etc/apache2/mods-enabled/dir.conf to give precedence to php files by bringing index.php in front of the rest of the file types.</p>
<pre><code class="language-bash">vim /etc/apache2/mods-enabled/dir.conf
</code></pre>
<p>Next, we need to edit our apache configuration. First find the configuration file for the owncloud website:</p>
<pre><code class="language-bash">apache2ctl -t -D DUMP_VHOSTS | grep myowncloud.mydomain.com
</code></pre>
<p>In my case that is /etc/apache2/sites-enabled/000-default-le-ssl.conf.<br>
Now enable the mod_headers apache plugin and then edit the website config file.</p>
<pre><code class="language-bash">a2enmod headers # Enable mod_headers
vim /etc/apache2/sites-enabled/000-default-le-ssl.conf
</code></pre>
<p>Within the config file, add the following under the <code>&lt;VirtualHost&gt;</code> tag:</p>
<pre><code class="language-xml">&lt;IfModule mod_headers.c&gt;
  Header always set Strict-Transport-Security &quot;max-age=15552000; includeSubDomains&quot;
&lt;/IfModule&gt;
</code></pre>
<p>and set:</p>
<pre><code>DocumentRoot /var/www/owncloud
</code></pre>
<p>Finally, restart apache:</p>
<pre><code>systemctl restart apache2
</code></pre>
<h2 id="configurephpforproduction">Configure php for production</h2>
<p>Make sure the following options in <code>/etc/php/7.2/apache2/php.ini</code> are set as below to reduce unintended information exposure through debugging messages:</p>
<pre><code>display_errors = Off
display_startup_errors = Off
</code></pre>
<p>Also you may want to edit <code>/var/www/owncloud/.user.ini</code> to increase some<br>
limits. Mine currently reads:</p>
<pre><code>upload_max_filesize=1025M
post_max_size=1025M
memory_limit=1024M
mbstring.func_overload=0
always_populate_raw_post_data=-1
default_charset='UTF-8'
output_buffering=0
</code></pre>
<h2 id="configuremysqlownclouduserandprivileges">Configure mysql owncloud user and privileges</h2>
<p>Use your own password in place of <code>uniqueue_password</code> for the owncloud user here:</p>
<pre><code class="language-bash">mysql -u root -p &lt;&lt;SCRIPT
CREATE DATABASE owncloud;
GRANT ALL ON owncloud.* to 'owncloud'@'localhost' IDENTIFIED BY 'uniqueue_password';
FLUSH PRIVILEGES;
SCRIPT
</code></pre>
<p>You will also need to provide the mysql root password that you had set previously.</p>
<h2 id="configurecrontorunowncloudscheduledjobs">Configure cron to run owncloud-scheduled jobs</h2>
<pre><code class="language-bash">crontab -u www-data -e
</code></pre>
<p>Add the line:</p>
<pre><code>*/15  *  *  *  * php -f /var/www/owncloud/cron.php
</code></pre>
<h2 id="configureowncloudthroughwebinterface">Configure owncloud through web interface</h2>
<p>It is now time to configure owncloud through the web interface. Point your browser to https://myowncloud.mydomain.com</p>
<p>Use the following data:</p>
<ul>
<li>username: Choose one</li>
<li>password: Choose a strong one</li>
<li>data folder: /mnt/owncloud_data</li>
<li>database: MySQL/MariaDB</li>
<li>database_user: owncloud</li>
<li>database_password: the one you used in the &quot;Configure mysql owncloud user and privileges&quot; step</li>
<li>database_name: owncloud</li>
<li>database_host: localhost</li>
</ul>
<p>Now once the configuration succeeds, use the username and password that you just configured to log in. You will be transferred to the Admin General Settings page. From there, configure owncloud to use the system cron and then review your pending security and setup warnings. At this point, mine were:<br>
<img src="http://www.sevangelatos.com/content/images/2019/04/Screenshot.png" alt="Setting up owncloud on ubuntu 18.04"></p>
<p>Both of these should be resolved once we configure redis.</p>
<h2 id="activateredisasacache">Activate redis as a cache</h2>
<p>Edit <code>/var/www/owncloud/config/config.php</code> and add the following settings:</p>
<pre><code class="language-php">  'memcache.locking' =&gt; '\OC\Memcache\Redis',
  'memcache.local' =&gt; '\OC\Memcache\Redis',
  'redis' =&gt; [
        'host' =&gt; 'localhost',
        'port' =&gt; 6379,
  ],
</code></pre>
<p>Now all your security and setup warnings should be resolved.</p>
<h2 id="fail2ban">Fail2ban</h2>
<p>Fail2ban is a security system that monitors the system logs and manipulates the system firewall on the fly to block hosts that send suspicious requests. We will install and configure fail2ban to block, for 30 minutes, any host with more than 5 failed login attempts within the last 10 minutes.</p>
<pre><code class="language-bash">apt install fail2ban
</code></pre>
<p>For fail2ban to work correctly, the owncloud log entries must be in the correct time zone. Check the system time zone and set it correctly in<br>
the owncloud config.php.</p>
<pre><code class="language-bash"># cat /etc/timezone
Europe/Athens
# vim /var/www/owncloud/config/config.php
...
'logtimezone' =&gt; 'Europe/Athens',
...
</code></pre>
<p>Make a local copy of the default fail2ban configuration:</p>
<pre><code class="language-bash">cp /etc/fail2ban/fail2ban.conf /etc/fail2ban/fail2ban.local
</code></pre>
<p>Now create a filter and jail for owncloud:</p>
<pre><code class="language-bash">cat &gt; /etc/fail2ban/filter.d/owncloud.conf &lt;&lt;OWNCLOUD_FILTER
[Definition]
failregex={&quot;reqId&quot;:&quot;.*&quot;,&quot;level&quot;:2,&quot;time&quot;:&quot;.*&quot;,&quot;remoteAddr&quot;:&quot;.*&quot;,&quot;user&quot;:&quot;.*&quot;,&quot;app&quot;:&quot;core&quot;,&quot;method&quot;:&quot;.*&quot;,&quot;url&quot;:&quot;.*&quot;,&quot;message&quot;:&quot;Login failed: '.*' \(Remote IP: '&lt;HOST&gt;'\)&quot;}
ignoreregex =

OWNCLOUD_FILTER

cat &gt; /etc/fail2ban/jail.d/owncloud.conf &lt;&lt;OWNCLOUD_JAIL
[owncloud]
enabled = true
port = 80,443
protocol = tcp
filter = owncloud
maxretry = 5
findtime = 600
bantime = 1800
logpath = /mnt/owncloud_data/owncloud.log
OWNCLOUD_JAIL

service fail2ban restart
service fail2ban status
</code></pre>
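<p>Before relying on the jail, it is worth sanity-checking that the failregex really matches owncloud's log format. fail2ban ships the <code>fail2ban-regex</code> tool for testing filters against a real log file; as a rough stand-in you can also grep a sample line (all values below are made up, and fail2ban's <code>&lt;HOST&gt;</code> token is replaced by an address pattern):</p>

```shell
# A made-up owncloud "Login failed" log line in the JSON format matched above
line='{"reqId":"x","level":2,"time":"2019-01-01T00:00:00+00:00","remoteAddr":"1.2.3.4","user":"bob","app":"core","method":"POST","url":"/login","message":"Login failed: '\''bob'\'' (Remote IP: '\''1.2.3.4'\'')"}'

# The core of the failregex, with the <HOST> token replaced by an IP pattern
pattern='"message":"Login failed: '\''.*'\'' \(Remote IP: '\''[0-9.]+'\''\)"'

echo "$line" | grep -Eq "$pattern" && echo "filter matches"
```

<p>If the grep prints nothing, the log format has probably changed and the failregex needs updating.</p>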
<p>Now check your configuration by attempting to log in to owncloud 5 times in a row with wrong credentials.</p>
<h2 id="occ">occ</h2>
<p>Owncloud provides the <a href="https://doc.owncloud.org/server/10.1/admin_manual/configuration/server/occ_command.html">occ</a> console application to run several administrative tasks. Occ is a php script that needs to run as the www-data user. For instance, you can use the following command to get a status report from the owncloud server:</p>
<pre><code>sudo -u www-data php /var/www/owncloud/occ status
</code></pre>
<p>Many more useful commands exist so do check the documentation.</p>
</div>]]></content:encoded></item><item><title><![CDATA[The story with keyboard switching shortcuts]]></title><description><![CDATA[<div class="kg-card-markdown"><p>And so one day, I set to build a lightweight desktop environment around lxde. After some customizations, the thing that bugged me was that I could not set the keyboard layout change shortcut to the Win + Space shortcut that I have grown to like. What's not to like about it.</p></div>]]></description><link>http://www.sevangelatos.com/the-story-with-keyboard-switching-shortcuts/</link><guid isPermaLink="false">5c3e5cb67f92ce00012d91ee</guid><category><![CDATA[Linux]]></category><category><![CDATA[Raspberry Pi]]></category><category><![CDATA[Raspbian]]></category><category><![CDATA[Debian]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Tue, 15 Jan 2019 23:09:22 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1543966888-6e858b90d30d?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://images.unsplash.com/photo-1543966888-6e858b90d30d?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="The story with keyboard switching shortcuts"><p>And so one day, I set to build a lightweight desktop environment around lxde. After some customizations, the thing that bugged me was that I could not set the keyboard layout change shortcut to the Win + Space shortcut that I have grown to like. What's not to like about it. I found it brilliant when I discovered it in OS X and was delighted to discover that it became the default in Windows 10. So I have standardized to that and not being able to configure it in lxde is a huge bummer for me.</p>
<p>It turns out that by default the lxpanel xkbd switching applet does not allow you to choose Win+Space as a layout switching shortcut. But fear not, you can add it yourself with the line:</p>
<pre><code>grp:win_space_toggle=Win+Space
</code></pre>
<p>Just edit the toggle.cfg file</p>
<pre><code class="language-sh">sudo vim  /usr/share/lxpanel/xkeyboardconfig/toggle.cfg
</code></pre>
<p>And you are done.</p>
<p>You can see all the possible options supported by your X server in</p>
<pre><code class="language-sh">man xkeyboard-config
</code></pre>
<p>If you are more of a barebones type, you can more or less achieve the same effect temporarily by using setxkbmap.</p>
<pre><code class="language-sh">setxkbmap us,gr -option 'grp:win_space_toggle'
</code></pre>
<p>but this does not persist after the current X session ends.</p>
<pre><code>setxkbmap -print
</code></pre>
<p>Prints the current settings.<br>
The freebsd handbook provides some good <a href="https://www.freebsd.org/doc/handbook/x-config.html#x-config-input">examples</a> to set a system-wide default.</p>
<p>I guess a configuration for my case, which I have not tested yet, would be putting something like the following:</p>
<pre><code>Section &quot;InputDevice&quot;
    Identifier &quot;Keyboard1&quot;
    Driver &quot;kbd&quot;

    Option &quot;XkbModel&quot; &quot;pc105&quot;
    Option &quot;XkbLayout&quot; &quot;us,gr&quot;
    Option &quot;XkbOptions&quot; &quot;grp:win_space_toggle&quot;
EndSection
</code></pre>
<p>under /usr/local/etc/X11/xorg.conf.d/kbd-layout-multi.conf (that is the FreeBSD location; on most Linux systems the equivalent directory is /etc/X11/xorg.conf.d/).</p>
<p>See also the more extensive <a href="https://www.x.org/releases/X11R7.5/doc/input/XKB-Config.html">XKB config docs</a></p>
</div>]]></content:encoded></item><item><title><![CDATA[OpenCV calibration patterns]]></title><description><![CDATA[<div class="kg-card-markdown"><p>It has been surprisingly difficult to find good opencv calibration patterns. I had found what I thought was a good calibration image and ended up wasting my time. Turns out that file <a href="https://github.com/MRPT/mrpt/issues/562">does not consist of squares</a>. So I ended up making some high resolution PNG files for camera calibration.</p></div>]]></description><link>http://www.sevangelatos.com/opencv-calibration-patterns/</link><guid isPermaLink="false">5c0d4a40b21cb5000187a6ea</guid><category><![CDATA[OpenCV]]></category><category><![CDATA[Vision]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Sun, 09 Dec 2018 17:52:29 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1522346423789-1f8452345bf9?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://images.unsplash.com/photo-1522346423789-1f8452345bf9?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="OpenCV calibration patterns"><p>It has been surprisingly difficult to find good opencv calibration patterns. I had found what I thought was a good calibration image and ended up wasting my time. Turns out that file <a href="https://github.com/MRPT/mrpt/issues/562">does not consist of squares</a>. So I ended up making some high resolution PNG files for camera calibration. The sizes of the patterns refer to the number of features (corners) and not the number of squares. So the 8x6 pattern consists of 9x7 squares.</p>
<ul>
<li><a href="https://github.com/sevangelatos/opencv_calibration_playground/raw/master/patterns/8x6_20mm.png">8x6 pattern with 20mm</a> square size</li>
<li><a href="https://github.com/sevangelatos/opencv_calibration_playground/raw/master/patterns/9x7_20mm.png">9x7 pattern with 20mm</a> square size</li>
<li><a href="https://github.com/sevangelatos/opencv_calibration_playground/raw/master/patterns/15x11_30mm.png">15x11 pattern with 30mm</a> square size</li>
</ul>
<p>Not a big deal, but it is nice to be able to grab one of those whenever you need them.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Moving to a git monorepo without losing history]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I am still hesitant to recommend to everyone to simplify their lives and move to a monorepo. But one can certainly go to extremes with many repositories. Juggling a system with tens of repositories is no fun either.</p>
<p>So, how can we re-combine several repositories back to one, without losing</p></div>]]></description><link>http://www.sevangelatos.com/monorepo/</link><guid isPermaLink="false">5bd0e37fb21cb5000187a6ca</guid><category><![CDATA[Programming]]></category><category><![CDATA[git]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Sat, 27 Oct 2018 18:46:13 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1531030874896-fdef6826f2f7?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=21c1a038fbb00c20e7bbe6c67016385c" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://images.unsplash.com/photo-1531030874896-fdef6826f2f7?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ&s=21c1a038fbb00c20e7bbe6c67016385c" alt="Moving to a git monorepo without losing history"><p>I am still hesitant to recommend to everyone to simplify their lives and move to a monorepo. But one can certainly go to extremes with many repositories. Juggling a system with tens of repositories is no fun either.</p>
<p>So, how can we re-combine several repositories back to one, without losing any history? This is a process that is doable but not entirely straightforward.</p>
<p>The essential steps are the following:</p>
<ul>
<li>Create a new empty repository and commit at least one file.</li>
<li>Add the repositories to be merged, as remotes</li>
<li>Create branches from the master branches of these repositories</li>
<li>Filter the branches to add a directory prefix to each repository</li>
<li>Merge these branches into our new cumulative master (using the magic --allow-unrelated-histories parameter).</li>
<li>Clean up our repo by removing the unneeded remotes, branches and commits</li>
</ul>
<p>Let's see a script that does just that...</p>
<pre><code class="language-bash">#!/bin/sh
# Creates a new monorepo by fusing multiple repositories

# Child repositories that are going to be fused
CHILDREN=&quot;repo_lib_a repo_lib_b&quot;

# Name of the created monorepo
MONOREPO=&quot;monorepo&quot;

# Exit in case of any error
set -e

# Be verbose
set -x

# create the monorepo
mkdir $MONOREPO
cd $MONOREPO
git init

# Create a first commit. A first commit is needed in order to be able to merge into master afterwards
echo &quot;*~&quot; &gt;.placeholder
git add .placeholder
git commit -m &quot;First commit&quot;
git rm .placeholder
git commit -m &quot;Remove placeholder file&quot;

# Add remotes for all children
for repo in $CHILDREN; do
        git remote add &quot;$repo&quot; &quot;git@github.com:path/${repo}.git&quot;
done

# Fetch all child repositories
git fetch --all

# Checkout all the master branches of the child repositories
for repo in $CHILDREN; do
        git checkout -f -b &quot;${repo}_master&quot; &quot;${repo}/master&quot;
        # Rewrite history to move all repo files into a subdirectory
        export SUBDIRECTORY=&quot;${repo}&quot;
        git filter-branch -f --index-filter '
    git ls-files -s | sed &quot;s-\t-&amp;${SUBDIRECTORY}/-&quot; | GIT_INDEX_FILE=$GIT_INDEX_FILE.new git update-index --index-info &amp;&amp; if [ -f &quot;$GIT_INDEX_FILE.new&quot; ]; then mv &quot;$GIT_INDEX_FILE.new&quot; &quot;$GIT_INDEX_FILE&quot;; fi' --
done

# Switch back to our master branch
git checkout -f master

# Merge all the repositories in our master branch.
for repo in $CHILDREN; do
        git merge --no-commit --allow-unrelated-histories &quot;${repo}_master&quot;

        git commit -a -m &quot;Merge ${repo} in subdir&quot;
done

# remove all child repo branches and remotes
for repo in $CHILDREN; do
        git branch -D &quot;${repo}_master&quot;
        git remote remove &quot;${repo}&quot;
done

# prune all history and do an aggressive gc
git reflog expire --expire=now --all &amp;&amp; git gc --prune=now --aggressive

</code></pre>
<p><a href="https://github.com/sevangelatos/monorepo_scripts/blob/master/move_to_monorepo.sh">Get</a> the script to try it out.</p>
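<p>To convince yourself that the approach really preserves history, you can dry-run it on throwaway repositories. The following sketch is independent of the script above (the repository names and the temp directory are made up, and it fetches by path instead of adding named remotes, for brevity): it fuses two single-commit repositories with <code>--allow-unrelated-histories</code> and counts the commits that end up in the monorepo.</p>

```shell
#!/bin/sh
# Minimal, self-contained demo: fuse two tiny throwaway repositories and
# verify that every commit from both histories survives in the monorepo.
set -e

WORK=$(mktemp -d)
cd "$WORK"

# Two child repositories, one (empty) commit each
for repo in lib_a lib_b; do
    git init -q "$repo"
    git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
        commit -q --allow-empty -m "init $repo"
done

# The monorepo, with its own first commit
git init -q mono
cd mono
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "First commit"

# Fetch each child by path and merge its unrelated history
for repo in lib_a lib_b; do
    git fetch -q "$WORK/$repo"
    git -c user.name=demo -c user.email=demo@example.com \
        merge -q --allow-unrelated-histories -m "Merge $repo" FETCH_HEAD
done

# 1 initial commit + (1 child commit + 1 merge commit) x 2 children = 5
git rev-list --count HEAD
```

<p>After the two merges, <code>git log</code> on the monorepo shows the root commits of both children, so nothing was lost.</p>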
<p>The only caveat, in my opinion, is that there might be a better way if these repositories were already being used as submodules of a main repository.<br>
In that case, there might be a merge strategy that maintains the connection between versions across repositories.</p>
<p>In my case, the repositories were not really connected, and I feel that this timeline, which assumes they were just parallel developments that get merged today, is truer to what actually happened.</p>
<p>Keeping <a href="https://medium.com/@fredrikmorken/why-you-should-stop-using-git-rebase-5552bee4fed1">true to the timeline</a> of the development might appear messy, but it is better than a good-looking but fake history.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Type safe identifiers in C++]]></title><description><![CDATA[In many cases when writing an API, we need to express the concept of an identifier. In C++ there is a better, type-safe way to do identifiers.]]></description><link>http://www.sevangelatos.com/type-safe-identifiers-in-c/</link><guid isPermaLink="false">5bd45827b21cb5000187a6cf</guid><category><![CDATA[Programming]]></category><category><![CDATA[C++]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Sat, 27 Oct 2018 15:36:53 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1522659333390-223b26e7bb00?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=c48c581802de9f6ffdc5b6bd28a3ba5f" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://images.unsplash.com/photo-1522659333390-223b26e7bb00?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ&s=c48c581802de9f6ffdc5b6bd28a3ba5f" alt="Type safe identifiers in C++"><p>In many cases when writing an API, we need to express the concept of an identifier. In operating systems, we need to identify resources, like a socket, an open file or a thread. Or when writing for instance a blogging system, we need to identify entities like a comment, a user or a post. In most systems these identifiers take the form of an integer.</p>
<p>In the family of C-like languages, we usually find two different forms of codifying this in an API. One is by using integers directly. For instance:</p>
<pre><code class="language-cpp">void LikeComment(int user_id, int comment_id);
</code></pre>
<p>Another way is by creating typedefs that masquerade the identifiers, like so:</p>
<pre><code class="language-cpp">typedef int UserId;
typedef int CommentId;
void LikeComment(UserId user, CommentId comment);
</code></pre>
<p>This second form is slightly more future-proof, in the sense that it allows you to easily change the underlying integer type that is used. If, for instance, at some point we discover that we need more than 2^31 comments, we can easily switch the underlying type to an int64_t.</p>
<p>On the other hand, arguably, typedefs obfuscate the underlying type. For instance, one might want to see at a glance that a CommentId is a primitive type and can efficiently be passed by value.</p>
<p>The most important drawback of both methods is that they provide no type safety at all. One could write:</p>
<pre><code class="language-cpp">// Notice the order of the arguments
LikeComment(comment_id, user_id);
</code></pre>
<p>And the compiler will happily do the wrong thing. You are also limited in your use of <a href="https://en.wikipedia.org/wiki/Polymorphism_(computer_science)">polymorphism</a>. The following code is illegal:</p>
<pre><code class="language-cpp">void Delete(UserId user);
void Delete(CommentId comment);
</code></pre>
<p>But it is perfectly legal to do crazy stuff like:</p>
<pre><code class="language-cpp">CommentId comment = user1 + user2;
</code></pre>
<p>In C++ there is a better way to solve all these problems. We can define a class that wraps an integer to act as an identifier. At the same time, we can use a template argument to differentiate between different identifiers and simply disallow numeric operations that do not make sense for identifiers.</p>
<pre><code class="language-cpp">#include &lt;ostream&gt;

template &lt;typename T&gt;
class TypeSafeIdentifier {
 public:
  explicit TypeSafeIdentifier(int id = 0) : id_(id) {}

  /// Get the identifier value as an int
  int value() const noexcept { return id_; }

  bool operator&lt;(TypeSafeIdentifier&lt;T&gt; rhs) const noexcept { return id_ &lt; rhs.id_; }
  bool operator==(TypeSafeIdentifier&lt;T&gt; rhs) const noexcept { return id_ == rhs.id_; }
  bool operator!=(TypeSafeIdentifier&lt;T&gt; rhs) const noexcept { return id_ != rhs.id_; }

  /// Stream output, so identifiers can be printed (used in the example below)
  friend std::ostream&amp; operator&lt;&lt;(std::ostream&amp; os, TypeSafeIdentifier&lt;T&gt; id) {
    return os &lt;&lt; id.id_;
  }

 private:
  int id_;
};
</code></pre>
<p>We have added an <code>operator&lt;</code> that might seem out of place. But it is a useful addition, because it allows us to put our identifiers in ordered containers like std::map. It is still possible to access the underlying integral value, e.g. to store it in a database.</p>
<p>To use this type we do the following:</p>
<pre><code class="language-cpp">class User;
using UserId = TypeSafeIdentifier&lt;User&gt;;
</code></pre>
<p>Notice that the <code>User</code> class does not even need to be defined. A forward declaration is sufficient. It is now impossible to mix up UserIds with CommentIds by accident.</p>
<p>If we use the TypeSafeIdentifier to define the UserId and the CommentId, it is now impossible to call the <code>LikeComment</code> function with the wrong order of arguments.</p>
<p>Let's see a more extensive example with cats and dogs :-)</p>
<pre><code class="language-cpp">#include &lt;cassert&gt;
#include &lt;iostream&gt;
#include &lt;map&gt;
#include &lt;string&gt;
#include &quot;type_safe_identifier.h&quot;

class Cat;
using CatId = TypeSafeIdentifier&lt;Cat&gt;;

class Dog;
using DogId = TypeSafeIdentifier&lt;Dog&gt;;

void Feed(DogId dog) { std::cout &lt;&lt; &quot;Feeding dog &quot; &lt;&lt; dog &lt;&lt; std::endl; }

void Feed(CatId cat) { std::cout &lt;&lt; &quot;Feeding cat &quot; &lt;&lt; cat &lt;&lt; std::endl; }

void DeclareParent(DogId parent, DogId child) {
  assert(child != parent);
  std::cout &lt;&lt; &quot;Dog &quot; &lt;&lt; parent &lt;&lt; &quot; is the parent of dog &quot; &lt;&lt; child
            &lt;&lt; std::endl;
}

int main() {
  CatId minty(1);
  DogId lassy(1);
  DogId spot(2);

  // These will do the right thing
  Feed(minty);
  Feed(spot);
  Feed(lassy);

  DeclareParent(lassy, spot);

  // But this would cause an error because a cat cannot ever give birth to
  // a dog.
  // DeclareParent(minty, lassy);

  std::map&lt;DogId, std::string&gt; dog_names = {{lassy, &quot;Lassy&quot;}, {spot, &quot;Spot&quot;}};

  // As this will also cause a compilation error because you can't
  // look up a cat among a collection of dogs.
  // std::cout &lt;&lt; dog_names[minty];

  for (const auto&amp; it : dog_names) {
    std::cout &lt;&lt; it.second &lt;&lt; &quot;'s DogId is &quot; &lt;&lt; it.first &lt;&lt; std::endl;
  }
}
</code></pre>
<p>You can get the full <a href="https://github.com/sevangelatos/type_safe_identifier/blob/master/type_safe_identifier.h">type_safe_identifier</a> header from <a href="https://github.com/sevangelatos/type_safe_identifier">github</a> and use it in your own projects. It is a very simple and self-contained header file.</p>
<p>There's another example over there at github, claiming that you can't give a bath to a cat. Well, that's obviously not true if you ask Maru.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/H4BPyVGL-Kk" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe></div>]]></content:encoded></item><item><title><![CDATA[John Carmack on Static code analysis]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is a mirror of a post from John Carmack. Recently I learned that his articles on #AltDevBlog are no longer accessible. So, in order to archive them, I am re-posting them here. These articles are definitely good reads and worth preserving.</p>
<div class="article-text">
            <p>The most important thing I have</p></div></div>]]></description><link>http://www.sevangelatos.com/john-carmack-on-static-code-analysis/</link><guid isPermaLink="false">5bc3b53cb21cb5000187a6bc</guid><category><![CDATA[Programming]]></category><category><![CDATA[C++]]></category><category><![CDATA[John Carmack]]></category><category><![CDATA[Static Code Analysis]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Sun, 14 Oct 2018 21:33:04 GMT</pubDate><media:content url="http://www.sevangelatos.com/content/images/2018/10/348193-carmack2.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="http://www.sevangelatos.com/content/images/2018/10/348193-carmack2.jpg" alt="John Carmack on Static code analysis"><p>This is a mirror of a post from John Carmack. Recently I learned that his articles on #AltDevBlog are no longer accessible. So, in order to archive them, I am re-posting them here. These articles are definitely good reads and worth preserving.</p>
<div class="article-text">
            <p>The most important thing I have done as a programmer in recent years is to aggressively pursue static code analysis.&nbsp; Even more valuable than the hundreds of serious bugs I have prevented with it is the change in mindset about the way I view software reliability and code quality.</p>
<p>It is important to say right up front that quality isn’t everything, and acknowledging it isn’t some sort of moral failing.&nbsp; <em>Value</em> is what you are trying to produce, and quality is only one aspect of it, intermixed with cost, features, and other factors.&nbsp; There have been plenty of hugely successful and highly regarded titles that were filled with bugs and crashed a lot; pursuing a Space Shuttle style code development process for game development would be idiotic.&nbsp; Still, quality does matter.</p>
<p>I have always cared about writing good code; one of my important internal motivations is that of the craftsman, and I always want to improve.&nbsp; I have read piles of books with dry chapter titles like “Policies, Standards, and Quality Plans”, and my work with Armadillo Aerospace has put me in touch with the very different world of safety critical software development.</p>
<p>Over a decade ago, during the development of Quake 3, I bought a license for PC-Lint and tried using it – the idea of automatically pointing out flaws in my code sounded great.&nbsp; However, running it as a command line tool and sifting through the reams of commentary that it produced didn’t wind up winning me over, and I abandoned it fairly quickly.</p>
<p>Both programmer count and codebase size have grown by an order of magnitude since then, and the implementation language has moved from C to C++, all of which contribute to a much more fertile ground for software errors. &nbsp;A few years ago, after reading a number of research papers about modern static code analysis, I decided to see how things had changed in the decade since I had tried PC-Lint.</p>
<p>At this point, we had been compiling at warning level 4 with only a very few specific warnings disabled, and warnings-as-errors forced programmers to abide by it.&nbsp; While there were some dusty reaches of the code that had years of accumulated cruft, most of the code was fairly modern.&nbsp; We thought we had a pretty good codebase.</p>
<p><strong>Coverity</strong></p>
<p>Initially, I contacted <a href="http://www.coverity.com/">Coverity</a> and signed up for a demo run.&nbsp; This is serious software, with the licensing cost based on total lines of code, and we wound up with a quote well into five figures.&nbsp; When they presented their analysis, they commented that our codebase was one of the cleanest of its size they had seen (maybe they tell all customers that to make them feel good), but they presented a set of about a hundred issues that were identified.&nbsp; This was very different than the old PC-Lint run.&nbsp; It was very high signal to noise ratio – most of the issues highlighted were clearly incorrect code that could have serious consequences.</p>
<p>This was eye opening, but the cost was high enough that it gave us pause.&nbsp; Maybe we wouldn’t introduce that many new errors for it to catch before we ship.</p>
<p><strong>Microsoft /analyze </strong></p>
<p>I probably would have talked myself into paying Coverity eventually, but while I was still debating it, Microsoft preempted the debate by incorporating their <a href="http://msdn.microsoft.com/en-us/library/d3bbz7tz%28v=VS.100%29.aspx">/analyze</a> functionality into the 360 SDK.&nbsp; /Analyze was previously available as part of the top-end, ridiculously expensive version of Visual Studio, but it was now available to every 360 developer at no extra charge.&nbsp; I read into this that Microsoft feels that game quality on the 360 impacts them more than application quality on Windows does. :-)</p>
<p>Technically, the Microsoft tool only performs local analysis, so it should be inferior to Coverity’s global analysis, but enabling it poured out <em>mountains</em> of errors, far more than Coverity reported.&nbsp; True, there were lots of false positives, but there was also a lot of scary, scary stuff.</p>
<p>I started slowly working my way through the code, fixing up first my personal code, then the rest of the system code, then the game code.&nbsp; I would work on it during odd bits of free time, so the entire process stretched over a couple months.&nbsp; One of the side benefits of having it stretch out was that it conclusively showed that it was pointing out some very important things – during that time there was an epic multi-programmer, multi-day bug hunt that wound up being traced to something that /analyze had flagged, but I hadn’t fixed yet.&nbsp; There were several other, less dramatic cases where debugging led directly to something already flagged by /analyze.&nbsp; These were real issues.</p>
<p>Eventually, I had all the code used to build the 360 executable compiling without warnings with /analyze enabled, so I checked it in as the default behavior for 360 builds.&nbsp; Every programmer working on the 360 was then getting the code analyzed every time they built, so they would notice the errors themselves as they were making them, rather than having me silently fix them at a later time.&nbsp; This did slow down compiles somewhat, but /analyze is by far the fastest analysis tool I have worked with, and it is oh so worth it.</p>
<p>We had a period where one of the projects accidentally got the static analysis option turned off for a few months, and when I noticed and re-enabled it, there were piles of new errors that had been introduced in the interim.&nbsp; Similarly, programmers working just on the PC or PS3 would check in faulty code and not realize it until they got a “broken 360 build” email report.&nbsp; These were demonstrations that the normal development operations were continuously producing these classes of errors, and /analyze was effectively shielding us from a lot of them.</p>
<p>Bruce Dawson has blogged about working with /analysis a number of times: <a href="http://randomascii.wordpress.com/category/code-reliability/">http://randomascii.wordpress.com/category/code-reliability/</a></p>
<p><strong>PVS-Studio</strong></p>
<p>Because we were only using /analyze on the 360 code, we still had a lot of code not covered by analysis – the PC and PS3 specific platform code, and all the utilities that only ran on the PC.</p>
<p>The next tool I looked at was <a href="http://www.viva64.com/en/pvs-studio/">PVS-Studio</a>.&nbsp; It has good integration with Visual Studio, and a convenient demo mode (try it!).&nbsp; Compared to /analyze, PVS-Studio is painfully slow, but it pointed out a number of additional important errors, even on code that was already completely clean to /analyze.&nbsp; In addition to pointing out things that are logically errors, PVS-Studio also points out a number of things that are common patterns of programmer error, even if it is still completely sensible code.&nbsp; This is almost guaranteed to produce some false positives, but damned if we didn’t have instances of those common error patterns that needed fixing.</p>
<p>There are a number of good articles on the PVS-Studio <a href="http://www.viva64.com/en/developers-resources/">site</a>, most with code examples drawn from open source projects demonstrating exactly what types of things are found.&nbsp;&nbsp; I considered adding some representative code analysis warnings to this article, but there are already better documented examples present there.&nbsp; Go look at them, and don’t smirk and think “I would never write that!”</p>
<p><strong>PC-Lint</strong></p>
<p>Finally, I went back to <a href="http://www.gimpel.com/html/pcl.htm">PC-Lint</a>, coupled with &nbsp;<a href="http://www.riverblade.co.uk/products/visual_lint/index.html">Visual Lint</a> for IDE integration.&nbsp; In the grand unix tradition, it can be configured to do just about anything, but it isn’t very friendly, and generally doesn’t “just work”.&nbsp; I bought a five-pack of licenses, but it has been problematic enough that &nbsp;I think all the other developers that tried it gave up on it.&nbsp; The flexibility does have benefits – I was able to configure it to analyze all of our PS3 platform specific code, but that was a tedious bit of work.</p>
<p>Once again, even in code that had been cleaned by both /analyze and PVS-Studio, new errors of significance were found.&nbsp; I made a real effort to get our codebase lint clean, but I didn’t succeed.&nbsp; I made it through all the system code, but I ran out of steam when faced with all the reports in the game code.&nbsp; I triaged it by hitting the classes of reports that I worried most about, and ignored the bulk of the reports that were more stylistic or potential concerns.</p>
<p>Trying to retrofit a substantial codebase to be clean at maximum levels in PC-Lint is probably futile.&nbsp; I did some “green field” programming where I slavishly made every picky lint comment go away, but it is more of an adjustment than most experienced C/C++ programmers are going to want to make.&nbsp; I still need to spend some time trying to determine the right set of warnings to enable to let us get the most benefit from PC-Lint.</p>
<p><strong>Discussion</strong></p>
<p>I learned a lot going through this process.&nbsp; I fear that some of it may not be easily transferable, that without personally going through hundreds of reports in a short amount of time and getting that sinking feeling in the pit of your stomach over and over again, “we’re doing OK” or “it’s not so bad” will be the default responses.</p>
<p>The first step is fully admitting that the code you write is riddled with errors.&nbsp; That is a bitter pill to swallow for a lot of people, but without it, most suggestions for change will be viewed with irritation or outright hostility.&nbsp; You have to <em>want</em> criticism of your code.</p>
<p>Automation is necessary.&nbsp; It is common to take a sort of smug satisfaction in reports of colossal failures of automatic systems, but for every failure of automation, the failures of humans are legion.&nbsp; Exhortations to “write better code”, plans for more code reviews, pair programming, and so on just don’t cut it, especially in an environment with dozens of programmers under a lot of time pressure.&nbsp; The value in catching even the small subset of errors that are tractable to static analysis <em>every single time</em> is huge.</p>
<p>I noticed that each time PVS-Studio was updated, it found something in our codebase with the new rules.&nbsp; This seems to imply&nbsp; that if you have a large enough codebase, any class of error that is syntactically legal probably exists there.&nbsp; In a large project, code quality is every bit as statistical as physical material properties – flaws exist all over the place, you can only hope to minimize the impact they have on your users.</p>
<p>The analysis tools are working with one hand tied behind their back, being forced to infer information from languages that don’t necessarily provide what they want, and generally making very conservative assumptions.&nbsp; You should cooperate as much as possible – favor indexing over pointer arithmetic, try to keep your call graph inside a single source file, use explicit annotations, etc.&nbsp; Anything that isn’t crystal clear to a static analysis tool probably isn’t clear to your fellow programmers, either. &nbsp;The classic hacker disdain for “bondage and discipline languages” is short sighted – the needs of large, long-lived, multi-programmer projects are just different than the quick work you do for yourself.</p>
<p>NULL pointers are the biggest problem in C/C++, at least in our code.&nbsp; The dual use of a single value as both a flag and an address causes an incredible number of fatal issues.&nbsp; C++ references should be favored over pointers whenever possible; while a reference is “really” just a pointer, it has the implicit contract of being not-NULL.&nbsp; Perform NULL checks when pointers are turned into references, then you can ignore the issue thereafter. &nbsp;There are a lot of deeply ingrained game programming patterns that are just dangerous, but I’m not sure how to gently migrate away from all the NULL checking.</p>
<p>Printf format string errors were the second biggest issue in our codebase, heightened by the fact that passing an idStr instead of idStr::c_str() almost always results in a crash, but annotating all our variadic functions with /analyze annotations so they are properly type checked kills this problem dead.&nbsp; There were dozens of these hiding in informative warning messages that would turn into crashes when some odd condition triggered the code path, which is also a comment about how the code coverage of our general testing was lacking.</p>
<p>A lot of the serious reported errors are due to modifications of code long after it was written.&nbsp; An incredibly common error pattern is to have some perfectly good code that checks for NULL before doing an operation, but a later code modification changes it so that the pointer is used again without checking.&nbsp; Examined in isolation, this is a comment on code path complexity, but when you look back at the history, it is clear that it was more a failure to communicate preconditions clearly to the programmer modifying the code.</p>
<p>By definition, you can’t focus on everything, so focus on the code that is going to ship to customers, rather than the code that will be used internally.&nbsp; Aggressively migrate code from shipping to isolated development projects.&nbsp; There was a paper recently that noted that all of the various code quality metrics correlated at least as strongly with code size as error rate, making code size alone give essentially the same error predicting ability.&nbsp; Shrink your important code.</p>
<p>If you aren’t deeply frightened about all the additional issues raised by concurrency, you aren’t thinking about it hard enough.</p>
<p>It is impossible to do a true control test in software development, but I feel the success that we have had with code analysis has been clear enough that I will say plainly <strong>it is irresponsible to not use it</strong>.&nbsp; There is objective data in automatic console crash reports showing that Rage, despite being bleeding edge in many ways, is remarkably more robust than most contemporary titles.&nbsp; The PC launch of Rage was unfortunately tragically flawed due to driver problems — I’ll wager AMD does not use static code analysis on their graphics drivers.</p>
<p>The takeaway action should be:&nbsp; If your version of Visual Studio has /analyze available, turn it on and give it a try.&nbsp; If I had to pick one tool, I would choose the Microsoft option.&nbsp; Everyone else working in Visual Studio, at least give the PVS-Studio demo a try.&nbsp; If you are developing commercial software, buying static analysis tools is money well spent.</p>
<p>A final parting comment from twitter:</p>
<p><a href="http://twitter.com/#%21/dave_revell"><strong>Dave Revell</strong> <span style="text-decoration: line-through;">@</span><strong>dave_revell</strong> </a>&nbsp;The more I push code through static analysis, the more I’m amazed that computers boot at all.</p>
<p>&nbsp;</p>
          </div></div>]]></content:encoded></item><item><title><![CDATA[John Carmack on Parallel Implementations]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is a mirror of a post from John Carmack. Recently I learned that his articles on #AltDevBlog are no longer accessible. So, in order to archive them, I am re-posting them here. These articles are definitely good reads and worth preserving.</p>
<div class="article-text">
            <p>I used to <a href="http://cam.ly/blog/2010/12/code-fearlessly/">Code Fearlessly</a> all</p></div></div>]]></description><link>http://www.sevangelatos.com/john-carmack-on-parallel-implementations/</link><guid isPermaLink="false">5bc3b46cb21cb5000187a6b8</guid><category><![CDATA[Programming]]></category><category><![CDATA[John Carmack]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Sun, 14 Oct 2018 21:27:12 GMT</pubDate><media:content url="http://www.sevangelatos.com/content/images/2018/10/John_Carmack_E3_2006.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="http://www.sevangelatos.com/content/images/2018/10/John_Carmack_E3_2006.jpg" alt="John Carmack on Parallel Implementations"><p>This is a mirror of a post from John Carmack. Recently I learned that his articles on #AltDevBlog are no longer accessible. So, in order to archive them, I am re-posting them here. These articles are definitely good reads and worth preserving.</p>
<div class="article-text">
            <p>I used to <a href="http://cam.ly/blog/2010/12/code-fearlessly/">Code Fearlessly</a> all the time, tearing up everything whenever I had a thought about a better way of doing something.&nbsp; There was even a bit of pride there — “I’m not afraid to suffer consequences in the quest to Do The Right Thing!”&nbsp; Of course, to be honest, the consequences usually fell on a more junior programmer who had to deal with an irate developer that had something unexpectedly stop working when I tore up the code to make it “better”.</p>
<p>Sure, with everything in source control you can roll back the changes if it catastrophically breaks, but if you did succeed in making some aspect better, there is an incentive to keep pushing forward, even if there is a bit of suffering involved.&nbsp; Somewhat more subtly, there are all sorts of opportunities to avoid making honest comparisons between the new way and the old way.&nbsp; Rolling back code and rebuilding to run a test is a pain, and you aren’t going to do it very often, even if you have a suspicion that things aren’t working quite as well in a particular case you hadn’t considered during the rewrite.</p>
<p>What I try to do nowadays is to implement new ideas in parallel with the old ones, rather than mutating the existing code.&nbsp; This allows easy and honest comparison between them, and makes it trivial to go back to the old reliable path when the spiffy new one starts showing flaws.&nbsp; The difference between changing a console variable to get a different behavior versus running an old exe, let alone reverting code changes and rebuilding, is significant.</p>
<p>For some tasks, this is pretty obvious.&nbsp; If you have a ray tracer, it isn’t hard to see an interface that allows you to have the Trace() function use various kD tree / BVH / BSP back ends, and a similar case can be made for the processing code that builds accelerator structures for them.&nbsp; Missing some pixels?&nbsp; Change over to the other implementation and check it there.</p>
<p>However, some of my most effective uses of this strategy have been more aggressive.&nbsp; Over the years, I have done a number of hardware acceleration conversions from software rendering engines.&nbsp; In the old days, I would basically start from scratch, first implementing the environment rendering, then the characters, then the special effects.&nbsp; There were always lots of little features that got forgotten, and comparing against the original meant playing through the game on two systems at once.</p>
<p>The last two times I did this, I got the software rendering code running on the new platform first, so everything could be tested out at low frame rates, then implemented the hardware accelerated version in parallel, setting things up so you could instantly switch between the two at any time.&nbsp; For a mobile OpenGL ES application being developed on a windows simulator, I opened a completely separate window for the accelerated view, letting me see it simultaneously with the original software implementation.&nbsp; This was a <em>very</em> significant development win.</p>
<p>If the task you are working on can be expressed as a pure function that simply processes input parameters into a return structure, it is easy to switch it out for different implementations.&nbsp; If it is a system that maintains internal state or has multiple entry points, you have to be a bit more careful about switching it in and out.&nbsp; If it is a gnarly mess with lots of internal callouts to other systems to maintain parallel state changes, then you have some cleanup to do before trying a parallel implementation.</p>
<p>There are two general classes of parallel implementations I work with:&nbsp; The reference implementation, which is much smaller and simpler, but will be maintained continuously, and the experimental implementation, where you expect one version to “win” and consign the other implementation to source control in a couple weeks after you have some confidence that it is both fully functional and a real improvement.</p>
<p>It is completely reasonable to violate some generally good coding rules while building an experimental implementation – copy, paste, and find-replace rename is actually a good way to start.&nbsp; Code fearlessly on the copy, while the original remains fully functional and unmolested.&nbsp; It is often tempting to shortcut this by passing in some kind of option flag to existing code, rather than enabling a full parallel implementation.&nbsp; It is a grey area, but I have been tending to find the extra path complexity with the flag approach often leads to messing up both versions as you work, and you usually compromise both implementations to some degree.</p>
<p>Every single time I have undertaken a parallel implementation approach, I have come away feeling that it was beneficial, and I now tend to code in a style that favors it.&nbsp; Highly recommended.</p>
          </div></div>]]></content:encoded></item><item><title><![CDATA[John Carmack on Functional Programming in C++]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is a mirror of a post by John Carmack. I recently learned that his articles on #AltDevBlog are no longer accessible, so in order to archive them I am re-posting them here. These articles are definitely good reads and worth preserving.</p>
<div class="article-text">
            <p class="MsoNormal">Probably everyone reading this has</p></div></div>]]></description><link>http://www.sevangelatos.com/john-carmack-on/</link><guid isPermaLink="false">5bc3b0dfb21cb5000187a6b3</guid><category><![CDATA[C++]]></category><category><![CDATA[Programming]]></category><category><![CDATA[John Carmack]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Sun, 14 Oct 2018 21:15:37 GMT</pubDate><media:content url="http://www.sevangelatos.com/content/images/2018/10/john_carmack.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="http://www.sevangelatos.com/content/images/2018/10/john_carmack.jpg" alt="John Carmack on Functional Programming in C++"><p>This is a mirror of a post by John Carmack. I recently learned that his articles on #AltDevBlog are no longer accessible, so in order to archive them I am re-posting them here. These articles are definitely good reads and worth preserving.</p>
<div class="article-text">
            <p class="MsoNormal">Probably everyone reading this has heard “functional programming” put forth as something that is supposed to bring benefits to software development, or even heard it touted as a silver bullet.&nbsp; However, a trip to <a href="http://en.wikipedia.org/wiki/Functional_programming">Wikipedia</a> for some more information can be initially off-putting, with early references to <a href="http://en.wikipedia.org/wiki/Lambda_calculus">lambda calculus</a> and <a href="http://en.wikipedia.org/wiki/Formal_system">formal systems</a>.&nbsp; It isn’t immediately clear what that has to do with writing better software.</p>
<p class="MsoNormal">My pragmatic summary:&nbsp; A large fraction of the flaws in software development are due to programmers not fully understanding all the possible states their code may execute in.&nbsp; In a multithreaded environment, the lack of understanding and the resulting problems are greatly amplified, almost to the point of panic if you are paying attention.&nbsp; Programming in a functional style makes the state presented to your code explicit, which makes it much easier to reason about, and, in a completely pure system, makes thread race conditions impossible.</p>
<p class="MsoNormal">I do believe that there is real value in pursuing functional programming, but it would be irresponsible to exhort everyone to abandon their C++ compilers and start coding in <a href="http://en.wikipedia.org/wiki/Lisp_%28programming_language%29">Lisp</a>, <a href="http://en.wikipedia.org/wiki/Haskell_%28programming_language%29">Haskell</a>, or, to be blunt, any other fringe language.&nbsp; To the eternal chagrin of language designers, there are plenty of externalities that can overwhelm the benefits of a language, and game development has more than most fields.&nbsp; We have cross platform issues, proprietary tool chains, certification gates, licensed technologies, and stringent performance requirements on top of the issues with legacy codebases and workforce availability that everyone faces.</p>
<p class="MsoNormal">If you are in circumstances where you can undertake significant development work in a non-mainstream language, I’ll cheer you on, but be prepared to take some hits in the name of progress.&nbsp; For everyone else: <em><strong>No matter what language you work in,</strong> <strong>programming in a functional style provides benefits.&nbsp; You should do it whenever it is convenient, and you should think hard about the decision when it isn’t convenient</strong>.</em>&nbsp; You can learn about lambdas, monads, currying, composing lazily evaluated functions on infinite sets, and all the other aspects of explicitly functionally oriented languages later if you choose.</p>
<p class="MsoNormal">C++ doesn’t encourage functional programming, but it doesn’t prevent you from doing it, and you retain the power to drop down and apply SIMD intrinsics to hand laid out data backed by memory mapped files, or whatever other nitty-gritty goodness you find the need for.</p>
<p>&nbsp;</p>
<p class="MsoNormal"><span style="font-size: 18pt">Pure Functions</span></p>
<p class="MsoNormal">A pure function only looks at the parameters passed in to it, and all it does is return one or more computed values based on the parameters.&nbsp; It has no logical <em>side effects.&nbsp; </em>This is an abstraction of course; every function has side effects at the CPU level, and most at the heap level, but the abstraction is still valuable.</p>
<p class="MsoNormal">It doesn’t look at or update global state.&nbsp; It doesn’t maintain internal state.&nbsp; It doesn’t perform any IO.&nbsp; It doesn’t mutate any of the input parameters.&nbsp; Ideally, it isn’t passed any extraneous data – getting an <span style="font-family: 'Courier New'">allMyGlobals</span> pointer passed in defeats much of the purpose.</p>
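<p>A small illustrative contrast between an impure function and a pure one (hypothetical names, not from the original post):</p>

```cpp
#include <vector>

int g_total = 0;  // global state

// Impure: reads and writes a global, so its result depends on the
// history of previous calls, not just its parameter.
int addSample(int value) {
    g_total += value;
    return g_total;
}

// Pure: only looks at the parameters passed in, and all it does is
// return a computed value based on them.  Same inputs, same output,
// every time.
int sum(const std::vector<int> &values) {
    int total = 0;
    for (int v : values) total += v;
    return total;
}
```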
<p class="MsoNormal">Pure functions have a lot of nice properties.</p>
<p class="MsoNormal">Thread safety.&nbsp; A pure function with value parameters is completely thread safe.&nbsp; With reference or pointer parameters, even if they are const, you do need to be aware of the danger that another thread doing non-pure operations might mutate or free the data, but it is still one of the most powerful tools for writing safe multithreaded code.</p>
<p class="MsoNormal">You can trivially switch them out for <a href="http://www.sevangelatos.com/john-carmack-on-parallel-implementations/">parallel implementations</a>, or run multiple implementations to compare the results.&nbsp; This makes it much safer to experiment and evolve.</p>
<p class="MsoNormal">Reusability.&nbsp; It is much easier to transplant a pure function to a new environment.&nbsp; You still need to deal with type definitions and any called pure functions, but there is no snowball effect.&nbsp; How many times have you known there was some code that does what you need in another system, but extricating it from all of its environmental assumptions was more work than just writing it over?</p>
<p class="MsoNormal">Testability.&nbsp; A pure function has <em>referential transparency</em>, which means that it will always give the same result for a set of parameters no matter when it is called, which makes it much easier to exercise than something interwoven with other systems.&nbsp;&nbsp; I have never been very responsible about writing test code;&nbsp; a lot of code interacts with enough systems that it can require elaborate harnesses to exercise, and I could often convince myself (probably incorrectly) that it wasn’t worth the effort.&nbsp; Pure functions are trivial to test; the tests look like something right out of a textbook, where you build some inputs and look at the output.&nbsp; Whenever I come across a finicky looking bit of code now, I split it out into a separate pure function and write tests for it.&nbsp; Frighteningly, I often find something wrong in these cases, which means I’m probably not casting a wide enough net.</p>
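<p>As a hypothetical example of that habit, here is a finicky-looking bit of angle logic split out into a pure function so it can be exercised directly:</p>

```cpp
#include <cmath>

// Illustrative example: wrap an angle into the [0, 360) range.
// Extracted as a pure function, it needs no harness to test.
float wrapDegrees(float degrees) {
    float wrapped = std::fmod(degrees, 360.0f);
    if (wrapped < 0.0f)
        wrapped += 360.0f;  // std::fmod keeps the sign of the dividend
    return wrapped;
}
```

<p>The tests look like something right out of a textbook: build some inputs and look at the output, e.g. <code>wrapDegrees(370.0f)</code> gives <code>10.0f</code> and <code>wrapDegrees(-90.0f)</code> gives <code>270.0f</code>.</p>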
<p class="MsoNormal">Understandability and maintainability.&nbsp; The bounding of both input and output makes pure functions easier to re-learn when needed, and there are fewer places for undocumented requirements regarding external state to hide.</p>
<p class="MsoNormal">Formal systems and automated reasoning about software will be increasingly important in the future.&nbsp; <a href="http://www.sevangelatos.com/john-carmack-on-static-code-analysis/">Static code analysis</a> is important today, and transforming your code into a more functional style aids analysis tools, or at least lets the faster local tools cover the same ground as the slower and more expensive global tools.&nbsp; We are a “Get ‘er done” sort of industry, and I do not see formal proofs of whole program “correctness” becoming a relevant goal, but being able to prove that certain classes of flaws are not present in certain parts of a codebase will still be very valuable.&nbsp; We could use some more science and math in our process.</p>
<p class="MsoNormal">Someone taking an introductory programming class might be scratching their head and thinking “aren’t all programs supposed to be written like this?”&nbsp; The reality is that far more programs are <a href="http://en.wikipedia.org/wiki/Big_ball_of_mud">Big Balls of Mud</a> than not.&nbsp; Traditional imperative programming languages give you escape hatches, and they get used all the time.&nbsp; If you are just writing throwaway code, do whatever is most convenient, which often involves global state.&nbsp; If you are writing code that may still be in use a year later, balance the convenience factor against the difficulties you will inevitably suffer later.&nbsp; Most developers are not very good at predicting the future time integrated suffering their changes will result in.</p>
<p>&nbsp;</p>
<p class="MsoNormal"><span style="font-size: 18pt">Purity In Practice</span></p>
<p class="MsoNormal">Not everything can be pure; unless the program is only operating on its own source code, at some point you need to interact with the outside world.&nbsp; It can be fun in a puzzly sort of way to try to push purity to great lengths, but the pragmatic break point acknowledges that side effects are necessary at some point, and manages them effectively.</p>
<p class="MsoNormal">It doesn’t even have to be all-or-nothing in a particular function.&nbsp; There is a continuum of value in how pure a function is, and the value step from almost-pure to completely-pure is smaller than that from spaghetti-state to mostly-pure.&nbsp; Moving a function towards purity improves the code, even if it doesn’t reach full purity.&nbsp; A function that bumps a global counter or checks a global debug flag is not pure, but if that is its only detraction, it is still going to reap most of the benefits.</p>
<p class="MsoNormal">Avoiding the worst in a broader context is generally more important than achieving perfection in limited cases.&nbsp; If you consider the most toxic functions or systems you have had to deal with, the ones that you know have to be handled with tongs and a face shield, it is an almost sure bet that they have a complex web of state and assumptions that their behavior relies on, and it isn’t confined to their parameters.&nbsp; Imposing some discipline in these areas, or at least fighting to prevent more code from turning into similar messes, is going to have more impact than tightening up some low level math functions.</p>
<p class="MsoNormal">The process of refactoring towards purity generally involves disentangling computation from the environment it operates in, which almost invariably means more parameter passing.&nbsp; This seems a bit curious – greater verbosity in programming languages is broadly reviled, and functional programming is often associated with code size reduction.&nbsp; The factors that allow programs in functional languages to sometimes be more concise than imperative implementations are pretty much orthogonal to the use of pure functions — garbage collection, powerful built in types, pattern matching, list comprehensions, function composition, various bits of syntactic sugar, etc.&nbsp; For the most part, these size reducers don’t have much to do with being functional, and can also be found in some imperative languages.</p>
<p class="MsoNormal">You <em>should</em> be getting irritated if you have to pass a dozen parameters into a function; you may be able to refactor the code in a manner that reduces the parameter complexity.</p>
<p class="MsoNormal">The lack of any language support in C++ for maintaining purity is not ideal.&nbsp; If someone modifies a widely used foundation function to be non-pure in some evil way, everything that uses the function also loses its purity.&nbsp; This sounds disastrous from a formal systems point of view, but again, it isn’t an all-or-nothing proposition where you fall from grace with the first sin.&nbsp; Large scale software development is unfortunately statistical.</p>
<p class="MsoNormal">It seems like there is a sound case for a pure keyword in future C/C++ standards.&nbsp; There are close parallels with const – an optional qualifier that allows compile time checking of programmer intention and will never hurt, and could often help, code generation.&nbsp; The D programming language does offer a pure keyword:&nbsp; <a href="http://www.d-programming-language.org/function.html">http://www.d-programming-language.org/function.html</a>&nbsp; Note their distinction between weak and strong purity – you need to also have const input references and pointers to be strongly pure.</p>
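<p>For what it is worth, GCC and Clang already offer non-standard function attributes along these lines; they are unchecked hints to the optimizer rather than enforced guarantees like D's <code>pure</code>, and the mapping to weak/strong purity is only loose:</p>

```cpp
// Roughly "weakly pure": no side effects, but may read memory through
// its pointer parameters, so repeated calls can be merged but not
// freely reordered across writes to that memory.
[[gnu::pure]] int countMatches(const int *data, int n, int key);

// Roughly "strongly pure": the result depends only on the argument
// values themselves, touching no memory at all.
[[gnu::const]] int square(int x);

int square(int x) { return x * x; }

int countMatches(const int *data, int n, int key) {
    int count = 0;
    for (int i = 0; i < n; ++i)
        if (data[i] == key)
            ++count;
    return count;
}
```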
<p class="MsoNormal">In some ways, a language keyword is over-restrictive — a function can still be pure even if it calls impure functions, as long as the side effects don’t escape the outer function.&nbsp; Entire programs can be considered pure functional units if they only deal with command line parameters instead of random file system state.</p>
<p class="MsoNormal"><span style="font-size: 18pt">Object Oriented Programming</span></p>
<p class="MsoNormal"><em><a href="https://twitter.com/#%21/mfeathers"><strong>Michael Feathers</strong> <strong>@mfeathers</strong></a>&nbsp; OO makes code understandable by encapsulating moving parts. FP makes code understandable by minimizing moving parts.</em></p>
<p class="MsoNormal">The “moving parts” are mutating states.&nbsp; Telling an object to change itself is lesson one in a basic object oriented programming book, and it is deeply ingrained in most programmers, but it is anti-functional behavior.&nbsp; Clearly there is some value in the basic OOP idea of grouping functions with the data structures they operate on, but if you want to reap the benefits of functional programming in parts of your code, you have to back away from some object oriented behaviors in those areas.</p>
<p class="MsoNormal">Class methods that can’t be const are not pure by definition, because they mutate some or all of the potentially large set of state in the object.&nbsp; They are not thread safe, and the ability to incrementally poke and prod objects into unexpected states is indeed a significant source of bugs.</p>
<p class="MsoNormal">Const object methods can still be technically pure if you don’t count the implicit <em>const this</em> pointer against them, but many objects are large enough to constitute a sort of global state all their own, blunting some of the clarity benefits of pure functions.&nbsp; Constructors can be pure functions, and generally should strive to be – they take arguments and return an object.</p>
<p class="MsoNormal">At the tactical programming level, you can often work with objects in a more functional manner, but it may require changing the interfaces a bit.&nbsp; At id we went over a decade with an idVec3 class that had a self-mutating <span style="font-family: 'Courier New'">void Normalize() </span>method, but no corresponding <span style="font-family: 'Courier New'">idVec3 Normalized() const</span> method.&nbsp; Many string methods were similarly defined as working on themselves, rather than returning a new copy with the operation performed on it – <span style="font-family: 'Courier New'">ToLowerCase(), StripFileExtension(),</span> etc.</p>
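<p>An illustrative reconstruction of that interface pair (not the actual id Software code):</p>

```cpp
#include <cmath>

// Sketch of the idVec3 example: the self-mutating method alongside
// the const, copy-returning one that a more functional style favors.
struct idVec3 {
    float x, y, z;

    float Length() const { return std::sqrt(x * x + y * y + z * z); }

    // Traditional OOP style: mutates the object in place.
    void Normalize() {
        float len = Length();
        if (len > 0.0f) { x /= len; y /= len; z /= len; }
    }

    // Functional style: leaves *this untouched, returns a new value.
    idVec3 Normalized() const {
        idVec3 result = *this;
        result.Normalize();
        return result;
    }
};
```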
<p class="MsoNormal"><span style="font-size: 18pt">Performance Implications</span></p>
<p class="MsoNormal">In almost all cases, directly mutating blocks of memory is the speed-of-light optimal case, and avoiding this is spending some performance.&nbsp; Most of the time this is of only theoretical interest; we trade performance for productivity all the time.</p>
<p class="MsoNormal">Programming with pure functions will involve more copying of data, and in some cases this clearly makes it the incorrect implementation strategy due to performance considerations.&nbsp; As an extreme example, you can write a pure <span style="font-family: 'Courier New'">DrawTriangle()</span> function that takes a framebuffer as a parameter and returns a completely new framebuffer with the triangle drawn into it as a result.&nbsp; Don’t do that.</p>
<p class="MsoNormal">Returning everything by value is the natural functional programming style, but relying on compilers to always perform <a href="http://en.wikipedia.org/wiki/Return_value_optimization">return value optimization</a> can be hazardous to performance, so passing a reference parameter for the output of complex data structures is often justifiable, but it has the unfortunate effect of preventing you from declaring the returned value as const to enforce <a href="http://en.wikipedia.org/wiki/Single_assignment#Single_assignment">single assignment</a>.</p>
<p class="MsoNormal">There will be a strong urge in many cases to just update a value in a complex structure passed in rather than making a copy of it and returning the modified version, but doing so throws away the thread safety guarantee and should not be done lightly.&nbsp; List generation is often a case where it is justified.&nbsp; The pure functional way to append something to a list is to return a completely new copy of the list with the new element at the end, leaving the original list unchanged.&nbsp; Actual functional languages are implemented in ways that make this not as disastrous as it sounds, but if you do this with typical C++ containers you will die.</p>
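<p>Spelled out with a typical C++ container, the pure append looks like this; it is fine at small sizes, but each call copies the entire list, which is exactly the cost warned about above:</p>

```cpp
#include <vector>

// Pure-functional append with a typical C++ container: the caller's
// list is left unchanged and a modified copy is returned.  The
// pass-by-value parameter is the copy.
std::vector<int> appended(std::vector<int> list, int value) {
    list.push_back(value);  // mutates only the local copy
    return list;
}
```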
<p class="MsoNormal">A significant mitigating factor is that performance today means parallel programming, which usually requires more copying and combining than in a single threaded environment even in the optimal performance case, so the penalty is smaller, while the complexity reduction and correctness benefits are correspondingly larger.&nbsp; When you start thinking about running, say, all the characters in a game world in parallel, it starts sinking in that the object oriented approach of updating objects has some deep difficulties in parallel environments.&nbsp; Maybe if all of the objects just referenced a read only version of the world state, and we copied over the updated version at the end of the frame…&nbsp; Hey, wait a minute…</p>
<p>&nbsp;</p>
<p class="MsoNormal"><span style="font-size: 18pt">Action Items</span></p>
<p class="MsoNormal">Survey some non-trivial functions in your codebase and track down every bit of external state they can reach, and all possible modifications they can make.&nbsp; This makes great documentation to stick in a comment block, even if you don’t do anything with it.&nbsp; If the function can trigger, say, a screen update through your render system, you can just throw your hands up in the air and declare the set of all effects beyond human understanding.</p>
<p class="MsoNormal">The next task you undertake, try from the beginning to think about it in terms of the real computation that is going on.&nbsp; Gather up your input, pass it to a pure function, then take the results and do something with it.</p>
<p class="MsoNormal">As you are debugging code, make yourself more aware of the part mutating state and hidden parameters play in obscuring what is going on.</p>
<p class="MsoNormal">Modify some of your utility object code to return new copies instead of self-mutating, and try throwing const in front of practically every non-iterator variable you use.</p>
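<p>A tiny sketch of that single-assignment style (hypothetical function, for illustration only):</p>

```cpp
#include <string>

// With const in front of every non-iterator local, each value is
// assigned exactly once, and accidental mutation is a compile error.
int scoreName(const std::string &name) {
    const std::size_t len = name.size();
    const bool isLong = len > 8;
    const int score = isLong ? 2 : 1;
    // len = 0;  // would not compile: len is const
    return score;
}
```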
<p>&nbsp;</p>
<p class="MsoNormal">Additional references:</p>
<p><a href="http://www.haskell.org/haskellwiki/Introduction">http://www.haskell.org/haskellwiki/Introduction</a></p>
<p><a href="http://lisperati.com/">http://lisperati.com/</a></p>
<p><a href="http://www.johndcook.com/blog/tag/functional-programming/">http://www.johndcook.com/blog/tag/functional-programming/</a></p>
<p><a href="http://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf">http://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf</a></p>
<p><a href="http://channel9.msdn.com/Shows/Going+Deep/Lecture-Series-Erik-Meijer-Functional-Programming-Fundamentals-Chapter-1">http://channel9.msdn.com/Shows/Going+Deep/Lecture-Series-Erik-Meijer-Functional-Programming-Fundamentals-Chapter-1</a></p>
<p><a href="http://www.cs.utah.edu/%7Ehal/docs/daume02yaht.pdf">http://www.cs.utah.edu/~hal/docs/daume02yaht.pdf</a></p>
<p><a href="http://www.cs.cmu.edu/%7Ecrary/819-f09/Backus78.pdf">http://www.cs.cmu.edu/~crary/819-f09/Backus78.pdf</a></p>
<p><a href="http://fpcomplete.com/the-downfall-of-imperative-programming/">http://fpcomplete.com/the-downfall-of-imperative-programming/</a></p>
<p>&nbsp;</p>
          </div></div>]]></content:encoded></item><item><title><![CDATA[John Carmack on Latency Mitigation Strategies]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is a mirror of a post by John Carmack. I recently learned that his articles on #AltDevBlog are no longer accessible, so in order to archive them I am re-posting them here. These articles are definitely good reads and worth preserving.</p>
<p><b>Abstract</b></p>
<p>Virtual reality (VR) is</p></div></div>]]></description><link>http://www.sevangelatos.com/john-carmack-on-latency-mitigation-strategies/</link><guid isPermaLink="false">5bc3ae60b21cb5000187a6ae</guid><category><![CDATA[Programming]]></category><category><![CDATA[John Carmack]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Sun, 14 Oct 2018 21:09:53 GMT</pubDate><media:content url="http://www.sevangelatos.com/content/images/2018/10/tec01_16.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="http://www.sevangelatos.com/content/images/2018/10/tec01_16.jpg" alt="John Carmack on Latency Mitigation Strategies"><p>This is a mirror of a post by John Carmack. I recently learned that his articles on #AltDevBlog are no longer accessible, so in order to archive them I am re-posting them here. These articles are definitely good reads and worth preserving.</p>
<p><b>Abstract</b></p>
<p>Virtual reality (VR) is one of the most demanding human-in-the-loop applications from a latency standpoint.&nbsp; The latency between the physical movement of a user’s head and updated photons from a head mounted display reaching their eyes is one of the most critical factors in providing a high quality experience.</p>
<p>Human sensory systems can detect very small relative delays in parts of the visual or, especially, audio fields, but when absolute delays are below approximately 20 milliseconds they are generally imperceptible.&nbsp; Interactive 3D systems today typically have latencies that are several times that figure, but alternate configurations of the same hardware components can allow that target to be reached.</p>
<p>A discussion of the sources of latency throughout a system follows, along with techniques for reducing the latency in the processing done on the host system.</p>
<p><b>Introduction</b></p>
<p>Updating the imagery in a head mounted display (HMD) based on a head tracking sensor is a subtly different challenge than most human / computer interactions.&nbsp; With a conventional mouse or game controller, the user is consciously manipulating an interface to complete a task, while the goal of virtual reality is to have the experience accepted at an unconscious level.</p>
<p>Users can adapt to control systems with a significant amount of latency and still perform challenging tasks or enjoy a game; many thousands of people enjoyed playing early network games, even with 400+ milliseconds of latency between pressing a key and seeing a response on screen.</p>
<p>If large amounts of latency are present in the VR system, users may still be able to perform tasks, but it will be by the much less rewarding means of using their head as a controller, rather than accepting that their head is naturally moving around in a stable virtual world.&nbsp; Perceiving latency in the response to head motion is also one of the primary causes of simulator sickness.&nbsp; Other technical factors that affect the quality of a VR experience, like head tracking accuracy and precision, may interact with the perception of latency, or, like display resolution and color depth, be largely orthogonal to it.</p>
<p>A total system latency of 50 milliseconds will feel responsive, but still subtly lagging.&nbsp; One of the easiest ways to see the effects of latency in a head mounted display is to roll your head side to side along the view vector while looking at a clear vertical edge.&nbsp; Latency will show up as an apparent tilting of the vertical line with the head motion; the view feels “dragged along” with the head motion.&nbsp; When the latency is low enough, the virtual world convincingly feels like you are simply rotating your view of a stable world.</p>
<p>Extrapolation of sensor data can be used to mitigate some system latency, but even with a sophisticated model of the motion of the human head, there will be artifacts as movements are initiated and changed.&nbsp; It is always better to not have a problem than to mitigate it, so true latency reduction should be aggressively pursued, leaving extrapolation to smooth out sensor jitter issues and perform only a small amount of prediction.</p>
<p><b>Data collection</b></p>
<p>It is not usually possible to introspectively measure the complete system latency of a VR system, because the sensors and display devices external to the host processor make significant contributions to the total latency.&nbsp; An effective technique is to record high speed video that simultaneously captures the initiating physical motion and the eventual display update.&nbsp; The system latency can then be determined by single stepping the video and counting the number of video frames between the two events.</p>
<p>In most cases there will be a significant jitter in the resulting timings due to aliasing between sensor rates, display rates, and camera rates, but conventional applications tend to display total latencies in the dozens of 240 fps video frames.</p>
<p>On an unloaded Windows 7 system with the compositing Aero desktop interface disabled, a gaming mouse dragging a window displayed on a 180 Hz CRT monitor can show a response on screen in the same 240 fps video frame that the mouse was seen to first move, demonstrating an end to end latency below four milliseconds.&nbsp; Many systems need to cooperate for this to happen: The mouse updates 500 times a second, with no filtering or buffering.&nbsp; The operating system immediately processes the update, and immediately performs GPU accelerated rendering directly to the framebuffer without any page flipping or buffering.&nbsp; The display accepts the video signal with no buffering or processing, and the screen phosphors begin emitting new photons within microseconds.</p>
<p>In a typical VR system, many things go far less optimally, sometimes resulting in end to end latencies of over 100 milliseconds.</p>
<p><b>Sensors</b></p>
<p>Detecting a physical action can be as simple as a watching a circuit close for a button press, or as complex as analyzing a live video feed to infer position and orientation.</p>
<p>In the old days, executing an IO port input instruction could directly trigger an analog to digital conversion on an ISA bus adapter card, giving a latency on the order of a microsecond and no sampling jitter issues.&nbsp; Today, sensors are systems unto themselves, and may have internal pipelines and queues that need to be traversed before the information is even put on the USB serial bus to be transmitted to the host.</p>
<p>Analog sensors have an inherent tension between random noise and sensor bandwidth, and some combination of analog and digital filtering is usually done on a signal before returning it.&nbsp; Sometimes this filtering is excessive, which can contribute significant latency and remove subtle motions completely.</p>
<p>Communication bandwidth delay on older serial ports or wireless links can be significant in some cases.&nbsp; If the sensor messages occupy the full bandwidth of a communication channel, latency equal to the repeat time of the sensor is added simply for transferring the message.&nbsp; Video data streams can stress even modern wired links, which may encourage the use of data compression, which usually adds another full frame of latency if not explicitly implemented in a pipelined manner.</p>
<p>Filtering and communication are constant delays, but the discretely packetized nature of most sensor updates introduces a variable latency, or “jitter” as the sensor data is used for a video frame rate that differs from the sensor frame rate.&nbsp; This latency ranges from close to zero if the sensor packet arrived just before it was queried, up to the repeat time for sensor messages.&nbsp; Most USB HID devices update at 125 samples per second, giving a jitter of up to 8 milliseconds, but it is possible to receive 1000 updates a second from some USB hardware.&nbsp; The operating system may impose an additional random delay of up to a couple milliseconds between the arrival of a message and a user mode application getting the chance to process it, even on an unloaded system.</p>
<p><b>Displays</b></p>
<p>On old CRT displays, the voltage coming out of the video card directly modulated the voltage of the electron gun, which caused the screen phosphors to begin emitting photons a few microseconds after a pixel was read from the frame buffer memory.</p>
<p>Early LCDs were notorious for “ghosting” during scrolling or animation, still showing traces of old images many tens of milliseconds after the image was changed, but significant progress has been made in the last two decades.&nbsp; The transition times for LCD pixels vary based on the start and end values being transitioned between, but a good panel today will have a switching time around ten milliseconds, and optimized displays for active 3D and gaming can have switching times less than half that.</p>
<p>Modern displays are also expected to perform a wide variety of processing on the incoming signal before they change the actual display elements.&nbsp; A typical Full HD display today will accept 720p or interlaced composite signals and convert them to the 1920×1080 physical pixels.&nbsp; 24 fps movie footage will be converted to 60 fps refresh rates.&nbsp; Stereoscopic input may be converted from side-by-side, top-down, or other formats to frame sequential for active displays, or interlaced for passive displays.&nbsp; Content protection may be applied.&nbsp; Many consumer oriented displays have started applying motion interpolation and other sophisticated algorithms that require multiple frames of buffering.</p>
<p>Some of these processing tasks could be handled by only buffering a single scan line, but some of them fundamentally need one or more full frames of buffering, and display vendors have tended to implement the general case without optimizing for the cases that could be done with low or no delay.&nbsp; Some consumer displays wind up buffering three or more frames internally, resulting in 50 milliseconds of latency even when the input data could have been fed directly into the display matrix.</p>
<p>Some less common display technologies have speed advantages over LCD panels; OLED pixels can have switching times well under a millisecond, and laser displays are as instantaneous as CRTs.</p>
<p>A subtle latency point is that most displays present an image incrementally as it is scanned out from the computer, which has the effect that the bottom of the screen changes 16 milliseconds later than the top of the screen on a 60 fps display.&nbsp; This is rarely a problem on a static display, but on a head mounted display it can cause the world to appear to shear left and right, or “waggle” as the head is rotated, because the source image was generated for an instant in time, but different parts are presented at different times.&nbsp; This effect is usually masked by switching times on LCD HMDs, but it is obvious with fast OLED HMDs.</p>
<p><b>Host processing</b></p>
<p>The classic processing model for a game or VR application is:</p>
<p>Read user input -&gt; run simulation -&gt; issue rendering commands -&gt; graphics drawing -&gt; wait for vsync -&gt; scanout</p>
<p>I = Input sampling and dependent calculation<br>
S = simulation / game execution<br>
R = rendering engine<br>
G = GPU drawing time<br>
V = video scanout time</p>
<p>All latencies are based on a frame time of roughly 16 milliseconds, a progressively scanned display, and zero sensor and pixel latency.</p>
<p>If the performance demands of the application are well below what the system can provide, a straightforward implementation with no parallel overlap will usually provide fairly good latency values.&nbsp; However, if running synchronized to the video refresh, the minimum latency will still be 16 ms even if the system is infinitely fast. &nbsp;&nbsp;This rate feels good for most eye-hand tasks, but it is still a perceptible lag that can be felt in a head mounted display, or in the responsiveness of a mouse cursor.</p>
<pre>Ample performance, vsync:
ISRG------------|VVVVVVVVVVVVVVVV|
.................. latency 16 – 32 milliseconds</pre>
<p>Running without vsync on a very fast system will deliver better latency, but only over a fraction of the screen, and with visible tear lines.&nbsp; The impact of the tear lines is related to the disparity between the two frames that are being torn between, and the amount of time that the tear lines are visible. &nbsp;Tear lines look worse on a continuously illuminated LCD than on a CRT or laser projector, and worse on a 60 fps display than a 120 fps display.&nbsp; Somewhat counteracting that, slow switching LCD panels blur the impact of the tear line relative to the faster displays.</p>
<p>If enough frames were rendered such that each scan line had a unique image, the effect would be of a “rolling shutter”, rather than visible tear lines, and the image would feel continuous.&nbsp; Unfortunately, even rendering 1000 frames a second, giving approximately 15 bands on screen separated by tear lines, is still quite objectionable on fast switching displays, and few scenes are capable of being rendered at that rate, let alone 60x higher for a true rolling shutter on a 1080P display.</p>
<pre>Ample performance, unsynchronized:
ISRG
VVVVV
..... latency 5 – 8 milliseconds at ~200 frames per second</pre>
<p>In most cases, performance is a constant point of concern, and a parallel pipelined architecture is adopted to allow multiple processors to work in parallel instead of sequentially.&nbsp; Large command buffers on GPUs can buffer an entire frame of drawing commands, which allows them to overlap the work on the CPU, which generally gives a significant frame rate boost at the expense of added latency.</p>
<pre>CPU:ISSSSSRRRRRR----|
GPU:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |GGGGGGGGGGG----|
VID:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |VVVVVVVVVVVVVVVV|
    .................................. latency 32 – 48 milliseconds</pre>
<p>When the CPU load for the simulation and rendering no longer fit in a single frame, multiple CPU cores can be used in parallel to produce more frames.&nbsp; It is possible to reduce frame execution time without increasing latency in some cases, but the natural split of simulation and rendering has often been used to allow effective pipeline parallel operation.&nbsp; Work queue approaches buffered for maximum overlap can cause an additional frame of latency if they are on the critical user responsiveness path.</p>
<pre>CPU1:ISSSSSSSS-------|
CPU2:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |RRRRRRRRR-------|
GPU :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |GGGGGGGGGG------|
VID :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |VVVVVVVVVVVVVVVV|
     .................................................... latency 48 – 64 milliseconds</pre>
<p>Even if an application is running at a perfectly smooth 60 fps, it can still have host latencies of over 50 milliseconds, and an application targeting 30 fps could have twice that. &nbsp;&nbsp;Sensor and display latencies can add significant additional amounts on top of that, so the goal of 20 milliseconds motion-to-photons latency is challenging to achieve.</p>
<p><b>Latency Reduction Strategies</b></p>
<p><b>Prevent GPU buffering</b></p>
<p>The drive to win frame rate benchmark wars has led driver writers to aggressively buffer drawing commands, and there have even been cases where drivers ignored explicit calls to glFinish() in the name of improved “performance”.&nbsp; Today’s fence primitives do appear to be reliably observed for drawing primitives, but the semantics of buffer swaps are still worryingly imprecise.&nbsp; A recommended sequence of commands to synchronize with the vertical retrace and idle the GPU is:</p>
<pre>SwapBuffers();
DrawTinyPrimitive();
InsertGPUFence();
BlockUntilFenceIsReached();</pre>
<p>While this should always prevent excessive command buffering on any conformant driver, it could conceivably fail to provide an accurate vertical sync timing point if the driver was transparently implementing triple buffering.</p>
<p>To minimize the performance impact of synchronizing with the GPU, it is important to have sufficient work ready to send to the GPU immediately after the synchronization is performed.&nbsp; The details of exactly when the GPU can begin executing commands are platform specific, but execution can be explicitly kicked off with glFlush() or equivalent calls.&nbsp; If the code issuing drawing commands does not proceed fast enough, the GPU may complete all the work and go idle with a “pipeline bubble”.&nbsp; Because the CPU time to issue a drawing command may have little relation to the GPU time required to draw it, these pipeline bubbles may cause the GPU to take noticeably longer to draw the frame than if it were completely buffered.&nbsp; Ordering the drawing so that larger and slower operations happen first will provide a cushion, as will pushing as much preparatory work as possible before the synchronization point.</p>
<pre>Run GPU with minimal buffering:
CPU1:ISSSSSSSS-------|
CPU2:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |RRRRRRRRR-------|
GPU :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |-GGGGGGGGGG-----|
VID :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |VVVVVVVVVVVVVVVV|
     ................................... latency 32 – 48 milliseconds</pre>
<p>Tile based renderers, as are found in most mobile devices, inherently require a full scene of command buffering before they can generate their first tile of pixels, so synchronizing before issuing any commands will destroy far more overlap.&nbsp; In a modern rendering engine there may be multiple scene renders for each frame to handle shadows, reflections, and other effects, but increased latency is still a fundamental drawback of the technology.</p>
<p>High end, multiple GPU systems today are usually configured for AFR, or Alternate Frame Rendering, where each GPU is allowed to take twice as long to render a single frame, but the overall frame rate is maintained because there are two GPUs producing frames.</p>
<pre>Alternate Frame Rendering dual GPU:
CPU1:IOSSSSSSS-------|IOSSSSSSS-------|
CPU2:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |RRRRRRRRR-------|RRRRRRRRR-------|
GPU1:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | GGGGGGGGGGGGGGGGGGGGGGGG--------|
GPU2:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | GGGGGGGGGGGGGGGGGGGGGGG---------|
VID :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |VVVVVVVVVVVVVVVV|
     .................................................... latency 48 – 64 milliseconds</pre>
<p>Similarly to the case with CPU workloads, it is possible to have two or more GPUs cooperate on a single frame in a way that delivers more work in a constant amount of time, but it increases complexity and generally delivers a lower total speedup.</p>
<p>An attractive direction for stereoscopic rendering is to have each GPU on a dual GPU system render one eye, which would deliver maximum performance and minimum latency, at the expense of requiring the application to maintain buffers across two independent rendering contexts.</p>
<p>The downside to preventing GPU buffering is that throughput performance may drop, resulting in more dropped frames under heavily loaded conditions.</p>
<p><b>Late frame scheduling</b></p>
<p>Much of the work in the simulation task does not depend directly on the user input, or would be insensitive to a frame of latency in it.&nbsp; If the user processing is done last, and the input is sampled just before it is needed, rather than stored off at the beginning of the frame, the total latency can be reduced.</p>
<p>It is very difficult to predict the time required for the general simulation work on the entire world, but the work just for the player’s view response to the sensor input can be made essentially deterministic.&nbsp; If this is split off from the main simulation task and delayed until shortly before the end of the frame, it can remove nearly a full frame of latency.</p>
<pre>Late frame scheduling:
CPU1:SSSSSSSSS------I|
CPU2:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |RRRRRRRRR-------|
GPU :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |-GGGGGGGGGG-----|
VID :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |VVVVVVVVVVVVVVVV|
                    .................... latency 18 – 34 milliseconds</pre>
<p>Adjusting the view is the most latency sensitive task; actions resulting from other user commands, like animating a weapon or interacting with other objects in the world, are generally insensitive to an additional frame of latency, and can be handled in the general simulation task the following frame.</p>
<p>The drawback to late frame scheduling is that it introduces a tight scheduling requirement that usually requires busy waiting to meet, wasting power.&nbsp; If your frame rate is determined by the video retrace rather than an arbitrary time slice, assistance from the graphics driver in accurately determining the current scanout position is helpful.</p>
<p><b>View bypass</b></p>
<p>An alternate way of accomplishing a similar or slightly greater latency reduction is to allow the rendering code to modify the parameters delivered to it by the game code, based on a newer sampling of user input.</p>
<p>At the simplest level, the user input can be used to calculate a delta from the previous sampling to the current one, which can be used to modify the view matrix that the game submitted to the rendering code.</p>
<p>Delta processing in this way is minimally intrusive, but there will often be situations where the user input should not affect the rendering, such as cinematic cut scenes or when the player has died.&nbsp; It can be argued that a game designed from scratch for virtual reality should avoid those situations, because a non-responsive view in a HMD is disorienting and unpleasant, but conventional game design has many such cases.</p>
<p>A binary flag could be provided to disable the bypass calculation, but it is useful to generalize such that the game provides an object or function with embedded state that produces rendering parameters from sensor input data instead of having the game provide the view parameters themselves.&nbsp; In addition to handling the trivial case of ignoring sensor input, the generator function can incorporate additional information such as a head/neck positioning model that modifies position based on orientation, or lists of other models to be positioned relative to the updated view.</p>
<p>If the game and rendering code are running in parallel, it is important that the parameter generation function does not reference any game state to avoid race conditions.</p>
<pre>View bypass:
CPU1:ISSSSSSSSS------|
CPU2:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |IRRRRRRRRR------|
GPU :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |--GGGGGGGGGG----|
VID :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |VVVVVVVVVVVVVVVV|
                      .................. latency 16 – 32 milliseconds</pre>
<p>The input is only sampled once per frame, but it is simultaneously used by both the simulation task and the rendering task.&nbsp; Some input processing work is now duplicated by the simulation task and the render task, but it is generally minimal.</p>
<p>The latency for parameters produced by the generator function is now reduced, but other interactions with the world, like muzzle flashes and physics responses, remain at the same latency as the standard model.</p>
<p>A modified form of view bypass could allow tile based GPUs to achieve similar view latencies to non-tiled GPUs, or allow non-tiled GPUs to achieve 100% utilization without pipeline bubbles by the following steps:</p>
<p>Inhibit the execution of GPU commands, forcing them to be buffered.&nbsp; OpenGL has only the deprecated display list functionality to approximate this, but a control extension could be formulated.</p>
<p>All calculations that depend on the view matrix must reference it independently from a buffer object, rather than from inline parameters or as a composite model-view-projection (MVP) matrix.</p>
<p>After all commands have been issued and the next frame has started, sample the user input, run it through the parameter generator, and put the resulting view matrix into the buffer object for referencing by the draw commands.</p>
<p>Kick off the draw command execution.</p>
<pre>Tiler optimized view bypass:
CPU1:ISSSSSSSSS------|
CPU2:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |IRRRRRRRRRR-----|I
GPU :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |-GGGGGGGGGG-----|
VID :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |VVVVVVVVVVVVVVVV|
                                       .................. latency 16 – 32 milliseconds</pre>
<p>Any view frustum culling that was performed to avoid drawing some models may be invalid if the new view matrix has changed substantially enough from what was used during the rendering task.&nbsp; This can be mitigated at some performance cost by using a larger frustum field of view for culling, and hardware clip planes based on the culling frustum limits can be used to guarantee a clean edge if necessary.&nbsp; Occlusion errors from culling, where a bright object is seen that should have been occluded by an object that was incorrectly culled, are very distracting, but a temporary clean encroaching of black at a screen edge during rapid rotation is almost unnoticeable.</p>
<p><b>Time warping</b></p>
<p>If you had perfect knowledge of how long the rendering of a frame would take, some additional amount of latency could be saved by late frame scheduling the entire rendering task, but this is not practical due to the wide variability in frame rendering times.</p>
<pre>Late frame input sampled view bypass:
CPU1:ISSSSSSSSS------|
CPU2:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |----IRRRRRRRRR--|
GPU :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |------GGGGGGGGGG|
VID :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |VVVVVVVVVVVVVVVV|
                          .............. latency 12 – 28 milliseconds</pre>
<p>However, a post processing task on the rendered image can be counted on to complete in a fairly predictable amount of time, and can be late scheduled more easily.&nbsp; Any pixel on the screen, along with the associated depth buffer value, can be converted back to a world space position, which can be re-transformed to a different screen space pixel location for a modified set of view parameters.</p>
<p>After drawing a frame with the best information at your disposal, possibly with bypassed view parameters, instead of displaying it directly, fetch the latest user input, generate updated view parameters, and calculate a transformation that warps the rendered image into a position that approximates where it would be with the updated parameters.&nbsp; Using that transform, warp the rendered image into an updated form on screen that reflects the new input.&nbsp; If there are two dimensional overlays present on the screen that need to remain fixed, they must be drawn or composited in after the warp operation, to prevent them from incorrectly moving as the view parameters change.</p>
<pre>Late frame scheduled time warp:
CPU1:ISSSSSSSSS------|
CPU2:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |RRRRRRRRRR----IR|
GPU :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |-GGGGGGGGGG----G|
VID :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |VVVVVVVVVVVVVVVV|
                                    .... latency 2 – 18 milliseconds</pre>
<p>If the difference between the view parameters at the time of the scene rendering and the time of the final warp is only a change in direction, the warped image can be almost exactly correct within the limits of the image filtering.&nbsp; Effects that are calculated relative to the screen, like depth based fog (versus distance based fog) and billboard sprites will be slightly different, but not in a manner that is objectionable.</p>
<p>If the warp involves translation as well as direction changes, geometric silhouette edges begin to introduce artifacts where internal parallax would have revealed surfaces not visible in the original rendering.&nbsp; A scene with no silhouette edges, like the inside of a box, can be warped significant amounts and display only changes in texture density, but translation warping realistic scenes will result in smears or gaps along edges.&nbsp; In many cases these are difficult to notice, and they always disappear when motion stops, but first person view hands and weapons are a prominent case.&nbsp; This can be mitigated by limiting the amount of translation warp, compressing or making constant the depth range of the scene being warped to limit the dynamic separation, or rendering the disconnected near field objects as a separate plane, to be composited in after the warp.</p>
<p>If an image is being warped to a destination with the same field of view, most warps will leave some corners or edges of the new image undefined, because none of the source pixels are warped to their locations.&nbsp; This can be mitigated by rendering a larger field of view than the destination requires, but simply leaving unrendered pixels black is surprisingly unobtrusive, especially in a wide field of view HMD.</p>
<p>A forward warp, where source pixels are deposited in their new positions, offers the best accuracy for arbitrary transformations.&nbsp; At the limit, the frame buffer and depth buffer could be treated as a height field, but millions of half pixel sized triangles would have a severe performance cost.&nbsp; Using a grid of triangles at some fraction of the depth buffer resolution can bring the cost down to a very low level, and the trivial case of treating the rendered image as a single quad avoids all silhouette artifacts at the expense of incorrect pixel positions under translation.</p>
<p>Reverse warping, where the pixel in the source rendering is estimated based on the position in the warped image, can be more convenient because it is implemented completely in a fragment shader.&nbsp; It can produce identical results for simple direction changes, but additional artifacts near geometric boundaries are introduced if per-pixel depth information is considered, unless considerable effort is expended to search a neighborhood for the best source pixel.</p>
<p>If desired, it is straightforward to incorporate motion blur in a reverse mapping by taking several samples along the line from the pixel being warped to the transformed position in the source image.</p>
<p>Reverse mapping also allows the possibility of modifying the warp through the video scanout.&nbsp; The view parameters can be predicted ahead in time to when the scanout will read the bottom row of pixels, which can be used to generate a second warp matrix.&nbsp; The warp to be applied can be interpolated between the two of them based on the pixel row being processed.&nbsp; This can correct for the “waggle” effect on a progressively scanned head mounted display, where the 16 millisecond difference in time between the display showing the top line and bottom line results in a perceived shearing of the world under rapid rotation on fast switching displays.</p>
<p><b>Continuously updated time warping</b></p>
<p>If the necessary feedback and scheduling mechanisms are available, instead of predicting what the warp transformation should be at the bottom of the frame and warping the entire screen at once, the warp to screen can be done incrementally while continuously updating the warp matrix as new input arrives.</p>
<pre>Continuous time warp:
CPU1:ISSSSSSSSS------|
CPU2:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |RRRRRRRRRRR-----|
GPU :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |-GGGGGGGGGGGG---|
WARP:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; W| W W W W W W W W|
VID :&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |VVVVVVVVVVVVVVVV|
                                     ... latency 2 – 3 milliseconds for 500 Hz sensor updates</pre>
<p>The ideal interface for doing this would be some form of “scanout shader” that would be called “just in time” for the video display.&nbsp; Several video game systems like the Atari 2600, Jaguar, and Nintendo DS have had buffers ranging from half a scan line to several scan lines that were filled up in this manner.</p>
<p>Without new hardware support, it is still possible to incrementally perform the warping directly to the front buffer being scanned for video, and not perform a swap buffers operation at all.</p>
<p>A CPU core could be dedicated to the task of warping scan lines at roughly the speed they are consumed by the video output, updating the time warp matrix each scan line to blend in the most recently arrived sensor information.</p>
<p>GPUs can perform the time warping operation much more efficiently than a conventional CPU can, but the GPU will be busy drawing the next frame during video scanout, and GPU drawing operations cannot currently be scheduled with high precision due to the difficulty of task switching the deep pipelines and extensive context state.&nbsp; However, modern GPUs are beginning to allow compute tasks to run in parallel with graphics operations, which may allow a fraction of a GPU to be dedicated to performing the warp operations as a shared parameter buffer is updated by the CPU.</p>
<p><b>Discussion</b></p>
<p>View bypass and time warping are complementary techniques that can be applied independently or together.&nbsp; Time warping can warp from a source image at an arbitrary view time / location to any other one, but artifacts from internal parallax and screen edge clamping are reduced by using the most recent source image possible, which view bypass rendering helps provide.</p>
<p>Actions that require simulation state changes, like flipping a switch or firing a weapon, still need to go through the full pipeline for 32 – 48 milliseconds of latency based on what scan line the result winds up displaying on the screen, and translational information may not be completely faithfully represented below the 16 – 32 milliseconds of the view bypass rendering, but the critical head orientation feedback can be provided in 2 – 18 milliseconds on a 60 Hz display.&nbsp; In conjunction with low latency sensors and displays, this will generally be perceived as immediate.&nbsp; Continuous time warping opens up the possibility of latencies below 3 milliseconds, which may cross largely unexplored thresholds in human / computer interactivity.</p>
<p>Conventional computer interfaces are generally not as latency demanding as virtual reality, but sensitive users can tell the difference in mouse response down to the same 20 milliseconds or so, making it worthwhile to apply these techniques even in applications without a VR focus.</p>
<p>A particularly interesting application is in “cloud gaming”, where a simple client appliance or application forwards control information to a remote server, which streams back real time video of the game.&nbsp; This offers significant convenience benefits for users, but the inherent network and compression latencies makes it a lower quality experience for action oriented titles.&nbsp; View bypass and time warping can both be performed on the server, regaining a substantial fraction of the latency imposed by the network.&nbsp; If the cloud gaming client was made more sophisticated, time warping could be performed locally, which could theoretically reduce the latency to the same levels as local applications, but it would probably be prudent to restrict the total amount of time warping to perhaps 30 or 40 milliseconds to limit the distance from the source images.</p>
<p><b>Acknowledgements</b></p>
<p>Zenimax for allowing me to publish this openly.</p>
<p>Hillcrest Labs for inertial sensors and experimental firmware.</p>
<p>Emagin for access to OLED displays.</p>
<p>Oculus for a prototype Rift HMD.</p>
<p>Nvidia for an experimental driver with access to the current scan line number.</p>
          </div>]]></content:encoded></item><item><title><![CDATA[The World map of C++ STL Algorithms]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I just found this little gem by watching this video by Jonathan Boccara</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/bXkWuUe9V2I" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
<p>Within it you will find the world map of STL algorithms. A handy thing to hang on your wall :-). You can get the map from the <a href="https://www.fluentcpp.com/getthemap/">fluentcpp website</a>. But if you don't fancy subscribing to a</p></div>]]></description><link>http://www.sevangelatos.com/the-world-map-of-c-stl-algorithms/</link><guid isPermaLink="false">5bc1a980b21cb5000187a6a8</guid><category><![CDATA[C++]]></category><category><![CDATA[Programming]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Sat, 13 Oct 2018 08:53:58 GMT</pubDate><media:content url="http://www.sevangelatos.com/content/images/2018/10/ezgif.com-gif-maker.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="http://www.sevangelatos.com/content/images/2018/10/ezgif.com-gif-maker.jpg" alt="The World map of C++ STL Algorithms"><p>I just found this little gem by watching this video by Jonathan Boccara</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/bXkWuUe9V2I" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
<p>Within it you will find the world map of STL algorithms. A handy thing to hang on your wall :-). You can get the map from the <a href="https://www.fluentcpp.com/getthemap/">fluentcpp website</a>. But if you don't fancy subscribing to a mailing list just to get a glance at it, here is a lower resolution version of the map for your enjoyment.<br>
<img src="http://www.sevangelatos.com/content/images/2018/10/world_map_of_cpp_STL_algorithms_thumb.jpg" alt="The World map of C++ STL Algorithms"><br>
<a href="http://www.sevangelatos.com/content/images/2018/10/world_map_of_cpp_STL_algorithms.jpg">download</a></p>
</div>]]></content:encoded></item><item><title><![CDATA[The pure bash bible]]></title><description><![CDATA[<div class="kg-card-markdown"><p>The <a href="https://github.com/dylanaraps/pure-bash-bible/blob/master/README.md">pure bash bible</a> is such an impressive resource of bash tips and tricks catalogued in a very readable way, with excellent examples.</p>
<p>It shows how much you can do with bash built-in methods. Even though I am a bit split on how much these should be used, as most</p></div>]]></description><link>http://www.sevangelatos.com/the-pure-bash-bible/</link><guid isPermaLink="false">5b374e92e7d80100018ad3d6</guid><category><![CDATA[Linux]]></category><category><![CDATA[Programming]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Sat, 30 Jun 2018 09:43:29 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1473662711507-13345f9d447c?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=cfa115af36c2327739c4a44bb01ba892" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://images.unsplash.com/photo-1473662711507-13345f9d447c?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ&s=cfa115af36c2327739c4a44bb01ba892" alt="The pure bash bible"><p>The <a href="https://github.com/dylanaraps/pure-bash-bible/blob/master/README.md">pure bash bible</a> is such an impressive resource of bash tips and tricks catalogued in a very readable way, with excellent examples.</p>
<p>It shows how much you can do with bash built-ins alone. I am a bit split on how much these should be used, though, since most of them will make your scripts bash-only.</p>
<p>On the other hand, these can be a lifesaver if you are working with bash without a full Unix environment, for example on a low-storage embedded system, or with Git Bash on Windows. Not forking a new process for no reason is also a nice performance bonus.</p>
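As a small taste of the style (my own sketch in the same spirit as the book, not a verbatim excerpt), here is a whitespace trim done entirely with parameter expansion, with no fork to <code>sed</code> or <code>awk</code>:

```shell
#!/usr/bin/env bash
# Pure-bash whitespace trim: parameter expansion only, no external processes.
trim_string() {
    local s=$1
    s=${s#"${s%%[![:space:]]*}"}   # strip leading whitespace
    s=${s%"${s##*[![:space:]]}"}   # strip trailing whitespace
    printf '%s\n' "$s"
}

trim_string "   hello world   "    # prints "hello world"
```

The two expansions compute the leading and trailing runs of whitespace and then strip exactly those, so interior spacing is preserved.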
</div>]]></content:encoded></item><item><title><![CDATA[Words of wisdom from the trenches of KDE]]></title><description><![CDATA[<div class="kg-card-markdown"><p>If you are programming with Qt, or even C++ in general, <a href="https://techbase.kde.org/Development/Tutorials/Common_Programming_Mistakes">the KDE project has some advice</a> for you. I found that I frequently stumbled over some of these issues when working with Qt code.</p>
</div>]]></description><link>http://www.sevangelatos.com/words-of-wisdom-from-the/</link><guid isPermaLink="false">5b374d29e7d80100018ad3cc</guid><category><![CDATA[C++]]></category><category><![CDATA[KDE]]></category><category><![CDATA[Programming]]></category><category><![CDATA[Qt]]></category><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Sat, 30 Jun 2018 09:33:28 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1520315749-b79e33416df9?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=44a44631da1d532d256d1eb08c9b58b8" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://images.unsplash.com/photo-1520315749-b79e33416df9?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ&s=44a44631da1d532d256d1eb08c9b58b8" alt="Words of wisdom from the trenches of KDE"><p>If you are programming with Qt, or even C++ in general, <a href="https://techbase.kde.org/Development/Tutorials/Common_Programming_Mistakes">the KDE project has some advice</a> for you. I found that I frequently stumbled over some of these issues when working with Qt code.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Are your most used keys fading?]]></title><description><![CDATA[<div class="kg-card-markdown"><p><img src="http://www.sevangelatos.com/content/images/2018/06/clearcoat-1.jpg" alt="clearcoat-1"><br>
Well, just apply some automotive clearcoat. Let's see how that will work.</p>
</div>]]></description><link>http://www.sevangelatos.com/are-your-most-used-keys-fading/</link><guid isPermaLink="false">5b2ccfdce7d80100018ad3c4</guid><dc:creator><![CDATA[Spiros Evangelatos]]></dc:creator><pubDate>Fri, 22 Jun 2018 10:35:37 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1516415372068-d139ad9301ae?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=16c8e622ebdfaef108994372d9800959" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://images.unsplash.com/photo-1516415372068-d139ad9301ae?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ&s=16c8e622ebdfaef108994372d9800959" alt="Are your most used keys fading?"><p><img src="http://www.sevangelatos.com/content/images/2018/06/clearcoat-1.jpg" alt="Are your most used keys fading?"><br>
Well, just apply some automotive clearcoat. Let's see how that will work.</p>
</div>]]></content:encoded></item></channel></rss>