Lately, I have wanted to change the way I use my free time. I define free time as time spent not working, not sleeping and not eating; I estimate it at six hours on a work day and fourteen hours on a day off.
With the year 2020 being quite unusual, I was staying at home most of the time without noticing time passing. By the end of the year, I was mixing up the lengths of weeks and months, which disturbed me a lot.
For a few weeks now, I have been changing the way I spend my free time. I thought it would be nice to have a few separate activities in the same day to help me realize how time is passing by.
Activity list
Here is how I chose to distribute my free time. It's not a strict approach and I measure nothing, but I try to keep a simple ratio of 3/6, 2/6 and 1/6.
Recreation: 3/6
I spend most of this time on recreation. A few activities I've put into this category:
- video games
- movies
- reading novels
- sports
Creativity: 2/6
These activities require creativity, work and knowledge:
- writing code
- reading technical books
- playing music
- creating content (text, video, audio, etc.)
Chores: 1/6
Yes, obviously chores have to be done in free time... And it's always better to do a bit every day than to let them accumulate until you are forced to catch up.
Conclusion
I only started a few weeks ago, but I really enjoy it. As I said previously, it's not something I strictly apply; it's more a general way to spend my time, instead of, say, writing code for six hours in a row from the end of work until going to sleep. I really feel my life is better balanced now, and I feel a sense of accomplishment from the few activities done every day.
Questions / Answers
Some people asked me whether I plan in advance how I spend my time.
The answer is no. I don't plan anything, but when I start losing focus on what I'm doing (and this happens often), I remember this time-distribution method, realize it may be time to switch, and pick something from another category. Now that I think about it, I very often used to do something only because I was bored and lacked ideas for activities to occupy myself; with this list I no longer have that issue.
I don't often give my own opinion on this blog but I really feel it is important here.
The matter is ecology, fair distribution of money and civilization. I feel I need to share a bit about my lifestyle; maybe it will have an impact on some of my readers (a good one, toward the greater good, I hope). I really think one person can make a change. I changed myself, only by spending a few moments with a member of my family a few years ago. That person never tried to convince me of anything; they only lived by their own standards without ever lecturing me. These were simple things, nothing that would make them a pariah in our society. But I got curious about the reasons, and I figured them out myself much later; now I understand why.
My philosophy is simple: in modern life, where everything goes fast, where everyone worries about what others think of them, and where communication never stops, step back.
Here are the various statements I follow. They are self-defined; these are not absolute rules.
- Be yourself and be prepared to own who you are. If you don't have the latest gadget, you are not a has-been; if you don't live in a giant house, you didn't fail your career; if you don't have a top-notch shiny car, nobody should care.
- Reuse what you have. A little tear doesn't mean a piece of clothing can't be worn again; an electronic device being old doesn't mean it should be replaced.
- Open source is a great way to revive old computers.
- Reduce your food waste to zero and eat less meat: feeding the animals we eat requires far more food production than the meat we finally get out of it.
- Travel less; there is a lot more to see around where I live than on the other side of the planet. Certainly don't go on vacation far from home only to enjoy a beach under the sun. This also means no car when it can be avoided, and when I do use a car, why not carpool?
- Avoid gadgets (electronic devices that bring nothing useful) at all costs. Buy good gear (kitchen tools, workshop tools, furniture, etc.) that can be repaired. If possible, buy second-hand. For non-essential gear, second-hand is mandatory.
- In winter, heat to 19°C at most and wear warm clothes while at home.
- In summer, no A/C; use external insulation and vines along the house to help it cool down, plus fans and water, while wearing light clothes to keep cool.
While some people look for more and more, I seek less. There is not enough for everyone on the planet, so it's important to make sacrifices.
Of course, this is just how I am, and I don't expect anyone else to apply it; that would be insane :)
Be safe and enjoy this new year! <3
Lowtech Magazine, articles about doing things using simple technology
I am often asked how I publish my blog: how I write my texts and how they get published on three different media. This article is the opportunity for me to answer those questions.
For my publications I use the static site generator "cl-yag", which I wrote. Its main job is to generate the home index and the per-tag index files for each distribution medium: HTML for http, gophermap for gopher and gemtext for gemini. After the indexes are generated, for every article published as HTML a converter is called to transform the source file into HTML so it can be read in a web browser. For gemini and gopher, the source article is simply copied, with a few metadata lines added at the top of the file: title, date, author and keywords.
Publishing to these three formats at once from a single source file is a challenge that unfortunately requires sacrifices in the rendering if you don't want to write three versions of the same text. For gopher, I chose to distribute the texts as-is, as plain text files; the content may be markdown, org-mode, mandoc or something else, but gopher gives no way to tell. For gemini, the texts are distributed as .gmi files matching the gemtext type, even though older publications contain markdown. For http, it is simply HTML obtained by running a command that depends on the input format.
I recently decided to use the gemtext format by default instead of markdown for writing my articles. It certainly has fewer possibilities than markdown, but its rendering is unambiguous, whereas the rendering of markdown can vary with the implementation and the markdown flavor (tables or no tables? which image syntax? etc.).
When the site generator runs, all the indexes are regenerated. For published files, the modification time of the output is compared with that of the source file; if the source is newer, the published file is regenerated because there was a change. This saves an enormous amount of time, since my site is approaching 200 articles, and copying 200 files for gopher, 200 for gemini and running 200 conversion programs for HTML would make generation extremely long.
After all the files are generated, rsync is used to push the output directory for each protocol to the corresponding server. I use one server for http, two servers for gopher (the main one was not especially stable at the time) and one server for gemini.
I added an announcement system for Mastodon by calling the local program "toot" configured with a dedicated account. These changes were not merged into cl-yag because they are very specific to my personal use. This kind of modification makes me think that a static site generator can be a very personal tool, configured for an extremely specific need, and that it may be hard for someone else to use it. I decided to publish it back then; I don't know whether anyone actively uses it, but at least the code is there for the adventurous who want to take a look.
My blog generator supports mixing different source file types to be converted to HTML. This lets me use whatever formatting I want without having to redo everything.
Here are some of the commands used to convert input files (the raw articles as I write them) into HTML. You can see that the org-mode to HTML conversion is not the simplest. The cl-yag configuration file is LISP code loaded at runtime, so I can put comments in it, but also code if I want, which sometimes turns out handy.
(converter :name :gemini :extension ".gmi" :command "gmi2html/gmi2html data/%IN | tee %OUT")
(converter :name :markdown :extension ".md" :command "peg-markdown -t html -o %OUT data/%IN")
(converter :name :markdown2 :extension ".md" :command "multimarkdown -t html -o %OUT data/%IN")
(converter :name :mmd :extension ".mmd" :command "cat data/%IN | awk -f mmd | tee %OUT")
(converter :name :mandoc :extension ".man"
:command "cat data/%IN | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT")
(converter :name :org-mode :extension ".org"
:command (concatenate 'string
"emacs data/%IN --batch --eval '(with-temp-buffer (org-mode) "
"(insert-file \"%IN\") (org-html-export-as-html nil nil nil t)"
"(princ (buffer-string)))' --kill | tee %OUT"))
When I declare a new article in the configuration file holding the metadata of all publications, I can choose which HTML converter to use if it is not the default one.
;; using the default converter
(post :title "Minimalistic markdown subset to html converter using awk"
:id "minimal-markdown" :tag "unix awk" :date "20190826")
;; using the mmd converter, a very simple awk script I wrote to convert a subset of markdown to html
(post :title "Life with an offline laptop"
:id "offline-laptop" :tag "openbsd life disconnected" :date "20190823" :converter :mmd)
Some statistics about the syntax of my various publications; over http you only see the HTML, but over gopher or gemini you see the source as-is.
- markdown :: 183
- gemini :: 12
- mandoc :: 4
- mmd :: 2
- org-mode :: 1
I often get questions about how I write my articles, which format I use and how I publish to various media. This article is the opportunity to highlight the whole process.
So, I use my own static site generator, cl-yag, which generates the indexes for the whole article list, but also for every tag, in html, gophermap format and gemini gemtext. After the generation of indexes, every article is converted into html by running a "converter" command. For gopher and gemini, the original text is picked up, some metadata is added at the top of the file, and that's all.
Publishing to all three formats is complicated, and sacrifices must be made if I want to avoid extra work (like writing one version for each). For gopher, I chose to distribute articles as plain text files; the content can be markdown, org-mode, mandoc or another format, and there is no way to tell. For gemini, the gemtext format is distributed, and for http it is html.
Recently, I decided to switch from markdown to gemtext as the main format for writing new texts. It has a bit fewer features than markdown, but markdown has so many implementations that the result can differ greatly from one renderer to another.
When I run the generator, all the indexes are regenerated, and each destination file's modification time is compared with that of the original file; if the destination file (the published gopher/html/gemini file) is newer than the original, there is no need to rewrite it, which saves a lot of time. After generation, the Makefile running the program calls rsync to publish the new directories to various servers: one server has gopher and html, another only gemini, and another only gopher, as a backup.
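The freshness rule above can be sketched with the shell's -nt test; the file names here are hypothetical, not cl-yag's actual layout (cl-yag does the comparison in Lisp, this only illustrates the decision):

```shell
# Minimal sketch of the "only rebuild what changed" rule (hypothetical file names).
touch article.html          # pretend a previous build produced this output
sleep 1
touch article.gmi           # the source gets modified afterwards

# test(1) -nt: true when the left operand is newer than the right one
if [ article.gmi -nt article.html ]; then
    echo "rebuild article.html"
else
    echo "up to date"
fi
```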
I added a Mastodon announcement step calling a local script to publish links to new publications; this wasn't merged into the cl-yag git repository because the code is too custom and depends on local programs. I think a blog generator is as personal as the blog itself. I decided to publish its code at first; I am not sure it makes much sense, because nobody may share my mindset enough to appropriate this tool, but at least it's available if someone wants to use it.
My blog software supports mixing input formats, so I am not tied to a single format for its whole life.
Here are the various commands used to convert a file from its original format to html. One can see that converting from org-mode to html on the command line isn't an easy task. As my blog software is written in Common LISP, the configuration file is also a valid Common Lisp file, so I can write some code in it if required.
(converter :name :gemini :extension ".gmi" :command "gmi2html/gmi2html data/%IN | tee %OUT")
(converter :name :markdown :extension ".md" :command "peg-markdown -t html -o %OUT data/%IN")
(converter :name :markdown2 :extension ".md" :command "multimarkdown -t html -o %OUT data/%IN")
(converter :name :mmd :extension ".mmd" :command "cat data/%IN | awk -f mmd | tee %OUT")
(converter :name :mandoc :extension ".man"
:command "cat data/%IN | mandoc -T markdown | sed -e '1,2d' -e '$d' | multimarkdown -t html -o %OUT")
(converter :name :org-mode :extension ".org"
:command (concatenate 'string
"emacs data/%IN --batch --eval '(with-temp-buffer (org-mode) "
"(insert-file \"%IN\") (org-html-export-as-html nil nil nil t)"
"(princ (buffer-string)))' --kill | tee %OUT"))
When I define a new article in the main file holding the metadata, I can specify the converter to use if it's not the default configured one.
;; using default converter
(post :title "Minimalistic markdown subset to html converter using awk"
:id "minimal-markdown" :tag "unix awk" :date "20190826")
;; using mmd converter, a simple markdown to html converter written in awk
(post :title "Life with an offline laptop"
:id "offline-laptop" :tag "openbsd life disconnected" :date "20190823" :converter :mmd)
Some statistics about the various formats used in my blog.
- markdown :: 183
- gemini :: 12
- mandoc :: 4
- mmd :: 2
- org-mode :: 1
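A count like this can be regenerated with a small pipeline; the flat data/ directory below is an assumption about the layout, not cl-yag's actual contract:

```shell
# Hypothetical sketch: count articles per source format by file extension,
# assuming all source files live directly in data/.
ls data | awk -F. '{print $NF}' | sort | uniq -c | sort -rn
```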
In this article I will share my opinion about things I like in OpenBSD; it may include a short rant about recent open source practices that don't help non-Linux support.
Privacy
There is no telemetry on OpenBSD. That's good for privacy: there is nothing to turn off to stop reporting information, because there is nothing reporting in the first place.
The default system settings prevent the microphone from recording sound, and the webcam can't be accessed without user consent because the device belongs to root by default.
Secure Firefox / Chromium
While the security features (pledge and, mainly, unveil) added to the market-dominating web browsers can sometimes be cumbersome, they are a real game changer compared to using those browsers on other operating systems.
With these security features enabled (by default), the web browsers are only able to access files in a few user-defined directories, like ~/Downloads or /tmp/ by default, plus some other directories required for the browsers to work.
This means your ~/.ssh, your ~/Documents and everything else can't be read by an exploit in a web browser or by a malicious extension.
It's possible to replicate this on Linux using AppArmor, but it's absolutely not out of the box and requires a lot of tweaking from the user to get a usable Firefox. I did try; it worked, but it requires a very good understanding of Firefox's needs and of the AppArmor profile syntax.
PF firewall
With this firewall, I can quickly check the rules of my desktop or server and understand what they are doing.
I also make heavy use of the bandwidth management feature to throttle programs that don't provide any rate limiting of their own. This is very important to me.
Linux users could use software such as trickle or wondershaper for this.
It's stable
Apart from some funky hardware, OpenBSD has proven very stable and reliable for me. I can easily reach two weeks of uptime on my desktop with a few suspend/resumes every day. My servers have been running 24/7 without incident for years.
I rarely go beyond two weeks of uptime on my workstation because I run the development version, -current, and need to upgrade once in a while.
Low maintenance
Keeping my OpenBSD up to date is very easy. I run syspatch and pkg_add -u twice a day to keep the system current. A release every six months requires a bit of work.
Basically, upgrading every six months looks like this, apart from some specific instructions explained in the upgrade guide (a database server major upgrade, for example):
# sysupgrade
[..wait..]
# pkg_add -u
# reboot
Documentation is accurate
Setting up an OpenBSD system with full disk encryption is easy.
The documentation for creating a router with NAT explains it step by step.
Every binary and configuration file has its own up-to-date man page.
The FAQ, the website and the man pages should contain everything one needs. This represents a lot of information; it may not always be easy to find what you need, but it's there.
If I had to be without internet for some time, I would prefer an OpenBSD system; the embedded documentation (man pages) should help me achieve what I want.
Consider configuring a router with traffic shaping on OpenBSD, and another one on Linux, without Internet access: I'd 100% prefer reading the PF man page.
Contributing is easy
This has been a hot topic recently. I very much enjoy the way OpenBSD manages contributions. I download the sources onto my system, anywhere I want, modify them, generate a diff and send it to the mailing list. All of this can be done from a console with tools I already use (git/cvs) and email.
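The whole flow boils down to producing a unified diff and mailing it; here is a sketch using plain diff(1) on two hypothetical files, standing in for a real cvs/git checkout:

```shell
# Sketch of the contribution flow described above (file names are illustrative).
printf 'old line\n' > bar.c.orig
printf 'new line\n' > bar.c

# diff(1) exits 1 when the files differ, hence the || true
diff -u bar.c.orig bar.c > fix.diff || true

# the resulting unified diff is what gets sent to the list, e.g.:
# mail -s "diff: fix bar.c" tech@openbsd.org < fix.diff
cat fix.diff
```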
There can be an entry barrier for new contributors: you may feel the people replying are not being kind to you. **This is not true.** If you sent a diff and received criticism (reviews) of your code, it means some people spent time teaching you how to improve your work. I understand some people may find it rude, but it's not.
This year I modestly contributed to the OpenIndiana and NixOS projects, which was an opportunity to compare how contributions are handled. Both projects use GitHub. The workflow is interesting, but understanding and mastering it is rather complicated.
OpenIndiana official website
NixOS official website
One has to create a GitHub account, fork the project, create a branch, make the changes, commit locally, push to the fork and use the GitHub interface to open a pull request; and that's only the short story. On NixOS, my first attempt ended in a pull request carrying six months of old commits. With good documentation and training this can be overcome, and I think this method has some advantages, like easy continuous integration of commits and easy code review, but it's a real entry barrier for newcomers.
High quality packages
My opinion may be biased here (even more than for the previous items), but I really think OpenBSD package quality is very high. Most packages work out of the box with sane defaults.
Packages requiring specific instructions ship with a README file explaining how to set up the service or the quirks that may happen.
Even if we lack some packages due to a lack of contributors and time (in addition to some packages relying too much on Linux to be easy to port), major packages are up to date and work very well.
I will take the opportunity of this article to voice a complaint about a general trend in open source:
- programs distributed only as flatpak / docker / snap are Linux-friendly, but hostile to non-Linux systems. They often use Linux-only features, and their build systems are made for Linux distribution methods.
- nodeJS programs: they are made of hundreds or even thousands of libraries and are often fragile, even on Linux. They are a real pain to get working on OpenBSD. Some node libraries embed rust programs; some will download a static binary and use it with no fallback, or will even try to compile source code instead of using the system's library or binary when installed.
- programs using git to build: our build process does its best to be clean; the dedicated build user **HAS NO NETWORK ACCESS** and won't run those git commands. There is no reason a build system has to run git to download sources in the middle of a build.
I do understand that the three items above exist because they are easy for developers. But if you write software and publish it, it would be very kind of you to think about how it works on non-Linux systems. Don't hesitate to ask on social media whether someone is willing to build your software on a platform different from yours if you want to improve support. We love BSD-friendly developers who don't reject OpenBSD-specific patches.
What I would like to see improved
This is my own opinion and doesn't represent the opinions of the OpenBSD team members. There are some things I wish OpenBSD could improve:
- Better ARM support
- Wifi speed
- Better performance (gently improving every release)
- FFS improvements with regard to reliability (I often get files in lost+found)
- Faster pkg_add -u
- Hardware video decoding/encoding support
- Better FUSE support and cifs/smb mount support
- Scaling up the contributions (more contributors and reviewers for ports@)
I am aware of all the work required here, and I'm certainly not the person who will do it. These are not complaints but wishes.
Unfortunately, everyone knows OpenBSD features come from hard work and not from wishes submitted to the developers :)
When you consider how small the team is compared to the other major OSes, I really think a good and efficient job is being done.
Third article in the offline laptop series.
Sometimes, network access is required
Having a totally disconnected system isn't really practical, for a few reasons. Sometimes, I really need to connect the offline laptop to the network. I produce some content on this computer, so I need backups. The easiest way for me to have reliable backups is to host them on a remote server, which requires a network connection for the duration of the backup. Of course, backups could be done on external disks or USB memory sticks (I don't have much to back up), but I never liked that backup solution; don't get me wrong, I don't say it's ineffective, but it doesn't suit my needs.
Besides backups, I may need to sync files such as my music. I may have bought new music that I want on the offline laptop, so network access is required.
I also require internet access to install new packages or upgrade the system. This isn't a regular need, but I occasionally require a new program I forgot to install. This could be solved by downloading the whole package repository, but that would require too much disk space for packages I would never use, and would also waste a lot of network transfer.
Finally, when I work on my blog, I need to publish the files; I use rsync to sync the destination directory from my local computer, which requires access to the Internet through ssh.
A nice place at the right time
The moments I enjoy this computer the most are when I take the laptop to a table with nothing around me. I can then focus on what I am doing. I find comfortable setups to be a source of distraction, so a stool and a table are very nice in my opinion.
In addition to having a clean place to use it, I like to dedicate some time to this computer: I can write texts or some code within a given time frame.
On a computer with 24/7 power and internet access, I always feel everything is within reach, and then I tend to slack.
Having rather limited battery life changes the way I experience the computer. Its time is finite: I have N minutes until the computer has to be charged or shut down. This produces the same effect as when I start watching a movie and pick one that fits the time I can spend on it.
Knowing I have only so much time until the computer stops, I stay focused, because time is passing.
Hello,
A few days ago, as someone who has been working remotely for three years, I published some tips to help new remote workers feel more confident in their new workplace: home. I've been told I should publish them on my blog so the information is easier to share, so here it is.
- dedicate some space to your work area; if you use a laptop, try to dedicate a table corner to it, so you don't have to clear away your “work station” all the time
- keep track of time: remember to drink and to stand up / walk every hour. You can set an hourly alarm, or use software like http://www.workrave.org/ or https://github.com/hovancik/stretchly, which are very useful. If you are alone at home, you may lose track of time, so this is important
- don’t forget to keep your phone at hand if you use it to communicate with colleagues; they may only know your phone number, so it may be their only way to reach you
- keep some routine for lunch: you should eat properly and take the time to do so; avoid eating in front of the computer
- don’t work too much after work hours; do as you would at your workplace: leave when you feel it’s time and shut down everything related to work. Wanting to do more and keeping an eye on mail is a common trap; don’t fall into it
- depending on your social skills, field of work and colleagues, talk with others (phone, text, whatever); it’s important to keep social links
Here are some other tips, from Jason Robinson:
- after work, distance yourself from work time by taking a short walk outside, cooking, doing laundry, or anything that gets you away from the work area and cuts the flow
- take at least one walk outside during the daytime, if possible, to get fresh air
- get a desk that can be adjusted for both standing and sitting
I hope this advice will help you get through the crisis. Take care of yourselves.
This is a little story that happened a few days ago; it explains well how I usually get involved with ports in OpenBSD.
1 - Lurking into ports/graphics/
At first, I was looking through the various ports in the graphics category, searching for an image editor that would run correctly on my offline laptop. Grafx2 is laggy when using the zoom mode and GIMP won’t run, so I just opened ports randomly to read their pkg/DESCR file.
This way, I often find gems I reuse later; sometimes I have less luck and only try 20 ports which are useless to me. And sometimes I find issues in ports while browsing randomly like this…
2 - Find the port « comix »
Then, the second or third port I looked at was « comix »; here is its DESCR file.
Comix is a user-friendly, customizable image viewer. It is specifically
designed to handle comic books, but also serves as a generic viewer. It
reads images in ZIP, RAR or tar archives (also gzip or bzip2 compressed)
as well as plain image files.
That looked awesome: I have a lot of books as PDFs that I want to read, but it’s not convenient in a “normal” PDF reader, so maybe comix would help!
3 - Using comix
Once comix was compiled (a mix of python and gtk), I started it and got errors opening PDFs… I started it again from the console, and the output explained that PDF files are not usable in comix.
Then I read about CBZ and CBT files: they are archives (zip or tar, optionally gzip or bzip2 compressed) containing pictures, definitely not what a PDF is.
4 - mcomix > comix
After a few searches on the Internet, I found that the last comix release is from 2009 and that it never supported PDF, so nothing wrong here; but I also found that comix has a fork named mcomix.
mcomix forked from comix a long time ago to fix issues and add support for new features (like PDF support). While its last release is from 2016, it works and still receives commits (the latest from late 2019). I’m going with mcomix!
5 - Installing mcomix from ports
The best way to install a program on OpenBSD is to make a port, so it’s correctly packaged, can be deinstalled, and can be submitted to the ports@ mailing list later.
I copied the comix folder to mcomix, used a brain-dead sed command to replace every occurrence of comix with mcomix, and it mostly worked! I won’t explain the little details, but I got mcomix working within a few minutes and was quite happy! Fun fact: the comix port Makefile mentioned mcomix as a suggested upgrade.
6 - Enjoying a CBR reader
With mcomix installed, I was able to read some PDFs; it was a good experience and I was pretty happy with it. I spent a few hours reading, mere moments after mcomix was installed.
7 - mcomix works but not all the time
After reading two long PDFs, I had issues with the third: some pages were not rendered and not displayed. After digging into this a bit, I learned about mcomix internals. PDF reading is done by rendering every page of the PDF with the mutool binary from the mupdf software, which is quite CPU intensive; for some reason the command execution fails inside mcomix, while I can run the exact same command a hundred times by hand with no failure. Worse, the issue is not deterministic: sometimes some pages fail to render, sometimes not!
8 - Time to debug some python
I really wanted to read those PDFs, so I took my favorite editor and started debugging some python, adding more debug output (mcomix has a -W parameter to enable debug output, which is very nice) to try to understand why it fails to get the output of a working command. Sadly, my python foo is too low and I wasn’t able to pinpoint the issue. I just found that it fails, sometimes, but I wasn’t able to understand why.
9 - mcomix on PowerPC
While mcomix was being clunky with PDFs, I wanted to check whether it worked on PowerPC. It took some time to get all the dependencies installed on my old computer, but finally mcomix displayed on the screen… and died on PDF loading! The crash seems related to GTK, and I don’t want to touch that; nobody will want to patch GTK for this anyway, so I lost hope there.
10 - Looking for alternative
Once I knew about mcomix, I was able to search the Internet for alternatives to it, and for CBR readers in general. A program named zathura seems well known here, and we have it in the OpenBSD ports tree.
The weird thing is that it comes with two different PDF plugins, one named mupdf and the other poppler. I tried quickly on my amd64 machine and zathura worked.
11 - Zathura on PowerPC
As zathura worked nicely on my main computer, I installed it on the PowerPC, first with the poppler plugin. I was able to view PDFs, but installing this plugin pulled in so many package dependencies that it was a bit sad. I deinstalled the poppler PDF plugin and installed the mupdf one.
I opened a PDF and… error. I tried again, starting zathura from the terminal, and got a message that PDF is not a supported format, with a lot of lines about the mupdf.so file not being usable. The mupdf plugin works on amd64 but is not usable on powerpc; this is a bug I need to report. I don’t understand why this issue happens, but it’s here.
12 - Back to square one
It seems reading PDFs is a mess. So why couldn’t I convert the PDFs to CBT files, use any CBT reader out there, and stop dealing with that PDF madness!!
13 - Use big calibre for the job
I found on the Internet that Calibre is the most used tool to convert a PDF into CBT files (or into something else, but I don’t really care here). I installed calibre, which is not lightweight, started it and wanted to change the default library path; the software hung when it displayed the file dialog. That wouldn’t stop me: I restarted calibre, kept the default path, clicked on « Add a book », and it hung again on the file dialog. I reported this issue to the ports@ mailing list, but it didn’t get solved, which means calibre is not usable.
14 - Using the command line
After all, CBT files are just images in a tar file; it should be easy to
reproduce the mcomix process, using mutool to render pictures and making a tar
of them.
IT WORKED.
I found two ways to proceed: one is extremely fast but may not put pages in
the correct order, the other requires CPU time.
Making CBT files - easiest process
The first way is super easy: it requires mutool (from the mupdf package) and it
will extract the pictures from the PDF, given it's not a vector PDF (I'm not
sure what would happen on those). The issue is that the embedded pictures in
the PDF have a name (a number, from the few examples I found), and it's not
necessarily the correct order. I guess this depends on how the PDF is made.
$ mutool extract The_PDF_file.pdf
$ tar cvf The_PDF_file.tar *jpg
That's all you need to get your CBT file. My PDF contained jpg files,
but it may be png in others, I'm not sure.
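Since the extracted pictures are named with bare numbers, plain alphabetical
sorting puts 10.jpg before 2.jpg, so the tar may not follow page order. Here is
a small sketch to mitigate this, assuming numeric names like the ones I saw;
the script and its function names are my own, not part of mutool:

```python
# pad_pages.py - zero-pad numeric image names so alphabetical order
# matches page order before building the tar archive
import os
import re
import sys

def padded_name(name, width=4):
    """Zero-pad the numeric part of names like "2.jpg" -> "0002.jpg"."""
    match = re.fullmatch(r"(\d+)\.(jpg|png)", name)
    if match is None:
        return name  # leave non-numeric names untouched
    number, extension = match.groups()
    return number.zfill(width) + "." + extension

def pad_directory(path):
    """Rename every numeric image file in the given directory."""
    for name in os.listdir(path):
        new_name = padded_name(name)
        if new_name != name:
            os.rename(os.path.join(path, name), os.path.join(path, new_name))

if __name__ == "__main__" and len(sys.argv) > 1:
    pad_directory(sys.argv[1])
```

Running it on the extraction directory before the tar step should be enough to
keep readers happy about page order.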
Making CBT files - safest process (slow)
The other way of making pictures out of the PDF is the one used in mcomix: call
mutool to render each page as a PNG file using the width/height/DPI you
want. That's the tricky part: you may not want to produce pictures with a
larger resolution than the original pictures (and mutool won't automatically
help you here), because you won't get any benefit. The same goes for the DPI. I
think this could be automated with a script checking each PDF page's
resolution and asking mutool to render the page at that exact resolution.
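For reference, the arithmetic such a script would need is simple: PDF page
sizes are expressed in points, at 72 points per inch, so the pixel width for a
given DPI is width_pt / 72 × dpi. A minimal sketch of that conversion (the
function is mine, and it does not read the page size from a real PDF):

```python
# pixel width to pass to "mutool draw -w" for a page width given in
# PDF points (72 points per inch) and a target rendering DPI

def pixel_width(width_in_points, dpi):
    """Pixel width of a page rendered at the given DPI."""
    return round(width_in_points / 72 * dpi)

# example: an A4 page is 595 points wide, so rendering at 150 DPI
# means asking mutool for a width of pixel_width(595, 150) pixels
```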
As a rule of thumb, rendering with the same width as your screen seems enough
to produce pictures of the correct size. Using larger values is not really an
issue, but it will create bigger files and take more time to render.
$ mutool draw -w 1920 -o page%d.png The_PDF_file.pdf
$ tar cvf The_PDF_file.tar page*.png
You will get one PNG file per page, correctly numbered, with a width of 1920
pixels. Note that instead of tar, you can use zip to create a zip file.
15 - Finally reading books again
After all this LONG process, I was finally able to read my PDF with any CBR
reader out there (even on a phone), and once the conversion is done, viewing
the files uses no CPU, as opposed to mcomix rendering all the pages when you
open a file.
I have to use zathura on the PowerPC, even if I like it less due to the
continuous page display (which can't be turned off), but mcomix definitely
works great when not dealing with PDF. I'm still unsure it's worth committing
mcomix to the ports tree if it fails randomly on random pages with PDF.
16 - Being an open source activist is exhausting
All I wanted was to read a PDF book with a warm cup of tea at hand.
It ended in learning new things, debugging code, making ports, submitting
bug reports and writing a story about all of this.
Last year I wrote a huge blog post about an offline laptop attempt.
It kind of worked, but I wasn't really happy with the setup, needs and goals.
So, it is back and I use it now, and I am very happy with it.
This article explains my experience solving my own needs; I would
appreciate not receiving advice or judgments here.
State of the need
Internet is infinite, my time is not
Having access to the Internet is a gift: I can access anything and anyone. But
this comes with a few drawbacks. I can waste my time on anything, which is not
particularly helpful. There is so much content that I only scratch the surface
of things, knowing they will still be there when I need them, and jump to
something else. The amount of data is impressive; one human can't absorb that
much, we have to deal with it.
I used to spend time on what I had, and now I just spend time on what exists.
An example of this: instead of reading the books I own, I look for which book
I may want to read someday, and meanwhile no book gets read.
Network socialization requires time
When I say "network socialization", it is to avoid the easy "social
network" phrase. I speak with people on IRC (in real time most of the time),
I help people on reddit, and I read and write mail most of the time
for OpenBSD development.
Don't get me wrong, I am happy doing this, but I always keep an eye on each of
them, trying to help people as soon as they ask a question, and this is really
time consuming for me. I spend a lot of time jumping from one thing to another
to keep myself updated on everything, and so I am too distracted to get
anything done.
In my first attempt at the offline laptop, I wanted to get my mail on it, but
it was too painful to download everything and keep the mail in sync. Sending
emails would have required network access too; it wouldn't have been an offline
laptop anymore.
IT as a living and as a hobby
On top of this, I work in IT, so I spend my days doing things over the
Internet, and after work I spend my time on open source projects. I cannot
really disconnect from the Internet for either.
How I solved this
The first step was to define « What do I like to do? », and I came up with this
short list:
- reading
- listening to music
- playing video games
- writing things
- learning things
One could say I don't need a computer to read books, but I have lots of ebooks
and PDFs about lots of subjects. The key is to load everything you need onto
the computer, because otherwise it is tempting to connect the device to the
Internet whenever you need a bit of this or that.
I use a very old computer with a PowerPC CPU (1.3 GHz single core) and 512MB
of RAM. I like that old computer; a slower computer forbids doing multiple
things at the same time and helps me stay focused.
Reading files
For reading, I found zathura and comix (and its fork mcomix) very
useful for reading huge PDFs; their scrolling customization makes them
pleasant to use.
Listening to music
I buy my music as FLAC files and download it; this doesn't require any Internet
access except at purchase time, so nothing special there. I use the moc player,
which is easy to use, has a lot of features and supports FLAC (on powerpc).
Video games
Emulation is a nice way to play lots of games on OpenBSD; on my old computer
it handles up to game boy advance / super nes / megadrive, which should let me
replay lots of games I own.
We also have a lot of nice games in ports, but my computer is too slow to run
them or they won't work on powerpc.
Encyclopedia - Wikipedia
I've set up a local wikipedia replica, as I explained in a previous article,
so anytime I need to find out about something, I can ask my local wikipedia.
It's always available. This is the best solution I found for a local
encyclopedia, and it works well.
Writing things
Since I started the offline computer experience, I have kept a diary. I never
felt the need to do so before, but I wanted to give it a try. I have to admit
that summing up what I achieved in the day before going to bed is a satisfying
experience, and now I keep updating it.
You can use any text editor you want; there is special software with specific
features, like rednotebook or lifeograph, which support embedded pictures or on
the fly markdown rendering. But a text file and your favorite editor also do
the job.
I also write articles for this blog. It's easy to do, as articles are
text files in a git repository. When I finish and need to publish, I get
network access and push the changes to the connected computer, which does the
publishing job.
Technical details
I will go fast on this. My setup is an old Apple iBook G4 with a
1024x768 screen (I love this 4:3 ratio) running OpenBSD.
The system firewall pf is configured to prevent any incoming
connections and to only allow TCP on the network to port 22, because
when I need to copy files, I use ssh / sftp. The /home partition is
encrypted using the softraid crypto device; full disk encryption is
not supported on powerpc.
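A pf.conf implementing this could look like the following sketch (this is not
my exact file, and gem0 is an example interface name):

```
# sketch: deny everything, keep only TCP to port 22 for ssh/sftp copies
block in all                           # no incoming connections at all
block out all                          # default deny outgoing too
pass out on gem0 proto tcp to port 22  # ssh/sftp file transfers
```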
The experience is even more enjoyable with a warm cup of tea on hand.
Wikipedia and openzim
If you ever wanted to host your own wikipedia replica, here is the simplest
way.
As wikipedia is REALLY huge, you don't really want to host the php MediaWiki
software and load the huge database; instead, the project made the openzim
format to compress the huge database that wikipedia became, while still
allowing fast searches.
Sadly, on OpenBSD we have no software reading zim files, and most such software
requires the openzim library to work, which requires extra work to get it as a
package on OpenBSD.
Fortunately, there is a python package implementing all you need in pure python
to serve zim files over http, and it's easy to install.
This tutorial should work on all other unix-like systems, but package or
binary names may change.
Downloading wikipedia
The Kiwix project is responsible for the wikipedia files; they regularly create
files from various projects (including stackexchange, gutenberg, wikibooks
etc…), but for this tutorial we want wikipedia:
https://wiki.kiwix.org/wiki/Content_in_all_languages
You will find a lot of files; the language is contained in the filename. Some
filenames are also self-explanatory about whether they contain everything or
only some categories, and whether they include pictures or not.
The full French file weighs 31.4 GB.
Running the server
For the next steps, I recommend setting up a dedicated user.
On OpenBSD, we will require python3 and pip:
$ doas pkg_add py3-pip--
Then we can use pip to fetch and install the dependencies for the zimply
software. The --user flag is rather important, as it allows any user to
download and install python libraries in their home folder instead of polluting
the whole system as root.
$ pip3.7 install --user --upgrade zimply
I wrote a small script to start the server with the zim file as a parameter. I
rarely write python, so the script may not be up to high standards.
File server.py:
from zimply import ZIMServer
import sys
import os.path
if len(sys.argv) == 1:
    print("usage: " + sys.argv[0] + " file")
    exit(1)

if os.path.exists(sys.argv[1]):
    ZIMServer(sys.argv[1])
else:
    print("Can't find file " + sys.argv[1])
And then you can start the server using the command:
$ python3.7 server.py /path/to/wikipedia_fr_all_maxi_2019-08.zim
You will be able to access wikipedia at the url http://localhost:9454/
Note that this is not a "wiki", as you can't see history or edit/create pages.
This kind of backup is used in places like Cuba or parts of Africa where people
don't have unlimited internet access; the project led by Kiwix allows more
people to access knowledge.
Hello, for a long time I have wanted to work on a special project using an
offline device.
I started using computers before my parents had internet access and
I was enjoying it. Would it still be the case if I were using a laptop
with no internet access?
When I think about an offline laptop, I immediately think I will miss
IRC, mails, file synchronization, Mastodon and remote ssh to my servers.
But do I really need it _all the time_?
As I started thinking about preparing an old laptop for the experiment,
different ideas with their pros and cons came to my mind.
Over the years, I have produced digital data and I cannot deny this. I
don't need all of it, but I still want some (some music, my texts,
some of my programs). How would I synchronize data from the offline
system to my main system (which has replicated backups and such)?
At first I was thinking about using a serial line between the two
laptops to synchronize files, but both laptops lack serial ports and
buying gear for that would cost too much for its purpose.
I ended up thinking that using an IP network _is fine_, if I connect for a
specific purpose. This went a bit further, because I also need to
install packages, and using a usb memory stick from another computer
to fetch packages for the offline system is _tedious_
and ineffective (downloading packages with their correct dependencies is a
hard task on OpenBSD when you only want the files). I also
came across a really specific problem: my offline device is an old
Apple PowerPC laptop, which is big-endian, while amd64 is little-endian. While
this does not seem like much of a problem, the OpenBSD filesystem is
dependent on endianness, so I could not share a usb memory device
using FFS; the alternatives are fat, ntfs or ext2, so it is a
dead end.
Finally, the super slow wireless network adapter of that
offline laptop allows me to connect only when I need a few file
transfers. I am using the system firewall pf to limit access to the outside.
In my pf.conf, I only have rules for DNS, NTP servers, my remote server,
the OpenBSD mirror for packages and my other laptop on the lan. I only
enable wifi if I need to push an article to my blog or to
pull a bit more music from my laptop.
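Such a ruleset could be sketched like this (the addresses and hostnames are
placeholders, not my actual pf.conf):

```
# sketch: default deny, then open only what is listed above
remote_server = "203.0.113.10"
other_laptop  = "192.168.1.2"
mirror        = "ftp.fr.openbsd.org"

block all
pass out proto { tcp, udp } to any port { domain, ntp }  # DNS and NTP
pass out proto tcp to $remote_server
pass out proto tcp to $mirror port { www, https }        # packages
pass out to $other_laptop                                 # lan transfers
```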
This is not entirely _offline_ then, because I can get access to the
internet at any time, but it helps me keep the device offline.
There is no modern web browser on powerpc, and I restricted packages to
the minimum.
So far, when using this laptop, there is no other distraction than the
stuff I do myself.
At the time I write this post, I only use xterm and tmux, with moc as a
music player (the audio system of the iBook G4 is surprisingly good!),
writing this text with ed and a 72-character-long prompt in order to wrap
words correctly by hand (I already talked about that trick!).
As my laptop has a short battery life, roughly two hours, this also
helps keep "sessions" to a reasonable duration. (Yes, I can still
plug the laptop in somewhere.)
I have not used this laptop a lot so far; I only started the experiment
a few days ago. I will write about this from time to time.
I plan to work on my gopher space to add new content only available
there :)