rolisz's site (https://rolisz.ro/)

Quarantine boardgames: Travelin'
https://rolisz.ro/2020/03/27/quarantine-boardgames-travelin/ (Fri, 27 Mar 2020 20:01:54 GMT)

Since Romania instituted a general lockdown this week, I haven't really left the house. Not that I normally leave the house too much, nor that I can really leave the house even if I wanted to, because I managed to sprain my ankle, inside the house.

But after so many days inside, I had an itch to go travelling. Luckily, I had a boardgame for that, Travelin', which took me all across Europe.

[Image: The Travelin' map]

Travelin' is a card game where the objective is to score the most points while traveling to different countries. You have four different kinds of cards: country cards, action cards, event cards and tickets.

[Image: The country cards]

Each country is worth a different amount of points. Countries that are further away, such as Iceland or Turkey, are worth more, while more central countries are worth less. Each country also has an associated action.

[Image: The action cards]

On each turn you can play an action card. Some give you points, some give you more actions, some make you swap cards with other players and some allow you to steal cards from others.

[Image: Ticket cards]

To travel between countries, you need tickets. Buses get you over one border, trains over two borders and with planes you can fly anywhere.

Event cards do things like block someone else's travel.

The game stops when someone travels to 5 countries. The winner is the one who got the most points.

There is a lot of backstabbing going on in the game, with all the card stealing, swapping and travel blocking. You better be good friends with those you play with, or they'll get upset with you.

Travelin' is a game made in Romania, by Eli Lester. The game was made in English originally, but last year it got translated into Romanian. The artwork for the second edition is delightful and the text commentary on each card is brilliantly done, making us laugh many times when we drew cards.

While it's a fun game, very quick to explain, I felt that it's a bit unbalanced. You start with 8 cards and that number can quickly go up, without a limit, which encourages hoarding of cards. Also I felt that the number of tickets in the deck is too low, because sometimes for several rounds nobody could travel and everyone was waiting to get some tickets.

Score: 7

Reflections during COVID19 times
https://rolisz.ro/2020/03/18/reflections-during-covid19-times/ (Wed, 18 Mar 2020 20:37:55 GMT)

I was checking up on Twitter and I read a thread about simulations showing that any loosening of the isolation and quarantine measures will inevitably lead to overwhelming of the hospital system and deaths on the order of millions across the world.

Despite the fact that I have been working from home for the last two years, the prospect of isolation for the foreseeable future hit me quite hard. I am blessed to live in a house, so I can at least go outside in the yard. More than that, I live at the edge of the city, so I can actually go for a walk safely, without meeting anyone else. I can go for a run. I have a gym at home. I have lots of board games. I have a job I can do from home. But still, one year of perhaps not seeing my parents? One year of not meeting with other friends, except for the rarest of occasions? Maybe one year of not breaking the bread physically in fellowship with other saints? My house construction would surely get delayed. Worries about my job also came to mind. My wife works as a pharmacist. She is definitely among the more exposed people. What if…? What if…?

I’ve struggled to focus at work. Also on Twitter, a friend was “encouraging” someone else, saying that they’ve also just had some days filled with anxiety and that it’s normal with everything that’s going on. That… didn’t help with my anxiety, to say the least.

But then I remembered something good: someone had posted on Facebook something about a Psalm being the antidote to our current situation. I didn’t remember exactly which Psalm, but I had an inkling it was Psalm 91, so I decided to read it.

He who dwells in the shelter of the Most High
    will abide in the shadow of the Almighty.
I will say to the LORD, “My refuge and my fortress,
    my God, in whom I trust.”

For he will deliver you from the snare of the fowler
    and from the deadly pestilence.
He will cover you with his pinions,
    and under his wings you will find refuge;
    his faithfulness is a shield and buckler.
You will not fear the terror of the night,
    nor the arrow that flies by day,
nor the pestilence that stalks in darkness,
    nor the destruction that wastes at noonday.

A thousand may fall at your side,
    ten thousand at your right hand,
    but it will not come near you.
You will only look with your eyes
    and see the recompense of the wicked.
    
Because you have made the LORD your dwelling place—
    the Most High, who is my refuge—
no evil shall be allowed to befall you,
    no plague come near your tent.

For he will command his angels concerning you
    to guard you in all your ways.
On their hands they will bear you up,
    lest you strike your foot against a stone.
You will tread on the lion and the adder;
    the young lion and the serpent you will trample underfoot.
    
“Because he holds fast to me in love, I will deliver him;
    I will protect him, because he knows my name.
When he calls to me, I will answer him;
    I will be with him in trouble;
    I will rescue him and honor him.
With long life I will satisfy him
    and show him my salvation.”

Psalm 91 - ESV

My inkling was correct. Psalm 91 fit the current situation well. Now, normally, I don’t like just taking random verses from the Bible and saying they apply to me. Most of the time, that means taking things out of context.

However, this Psalm is definitely at least meant for the Christ. The tempter uses it against him during the second temptation, when he quotes “For he will command his angels concerning you to guard you in all your ways. On their hands they will bear you up, lest you strike your foot against a stone.” And, because it’s a promise applied to Christ, we are also beneficiaries of it, if we are in Christ, as Paul so beautifully argues this in Galatians 3:29: “And if you are Christ's, then you are Abraham's offspring, heirs according to promise.” There he’s talking about the promises made to Abraham, but to quote Paul again, from Romans 8:32, “He who did not spare his own Son but gave him up for us all (that’s the promise made to Abraham), how will he not also with him graciously give us all things?”.

So, as long as I hold fast to him in love, He will deliver me. He will protect me, he will answer me when I call to him and he will be with me in trouble.

If the predictions in that Twitter thread are true, then there will be lots of trouble in the coming months, but I can rest assured that my God will be with me.

Boardgames Party: Hanabi
https://rolisz.ro/2020/03/08/boardgames-party-ha/ (Sun, 08 Mar 2020 21:44:46 GMT)

This time I'm going to present Hanabi, a cooperative card game. It's super small and can easily be taken on trips, and it plays with 2-4 players.

Each player is a fireworks maker in China who has messed up the fireworks right before the show is supposed to start. The twist is that nobody can see their own cards, but they can see what everyone else has.

[Image: Time chips]

On your turn, you can share information, discard cards or play cards. When giving information to the other players, you can say either what numbers their cards have or what colours the cards are. But you can only share information when there is still "time" left, which can be bought by discarding cards. Because you can't see your own cards, you can easily discard good cards that you still need to complete your fireworks. Most cards come in duplicates, but the final ones (highest scoring ones) are available only in one of each color.

[Image: The fuse]

The goal is to create 5 fireworks by placing cards of the same colour in order. If you place them in the wrong order, the fuse for the fireworks gets shorter.

Scoring is done by summing up the highest card for each color. On our first game, we did pretty badly (15 points) and on our second one we got to 18 points, out of a maximum of 25. We'll have to play a few more times until we get to that sweet 25!

[Image: Example fireworks]

The game is quite short, one round lasting less than 30 minutes. The rules are also quite short, less than one A4 page, so it's also easy to get started.

Hanabi is quite a fun game. Score: 8

Web crawler in Rust
https://rolisz.ro/2020/03/01/web-crawler-in-rust/ (Sun, 01 Mar 2020 16:56:35 GMT)

I have heard many good things about Rust for several years now. A couple of months ago, I finally decided to start learning Rust. I skimmed through the Book and did the exercises from rustlings. While they helped me get started, I learn best by doing some projects. So I decided to replace the crawler that I used for my Ghost blog, which had been written in bash with wget, with something written in Rust.

And I was pleasantly surprised. I am by no means very knowledgeable in Rust: I still have to look up most of the operations on the Option and Result types, and I have to DuckDuckGo how to make HTTP requests, read and write files and so on. But I was still able to write a minimal crawler in about 2-3 hours, and then in about 10 hours of total work I had something that was both faster and had fewer bugs than the wget script.

So let's start writing a simple crawler that downloads all the HTML pages from a blog.

Initializing a Rust project

After installing Rust, let's create a project somewhere:

 > cargo new rust_crawler

This initializes a Hello World program, which we can verify that it runs using:

> cargo run
   Compiling rust_crawler v0.1.0 (D:\Programming\rust_crawler)
    Finished dev [unoptimized + debuginfo] target(s) in 9.31s
     Running `target\debug\rust_crawler.exe`
Hello, world!

Making HTTP requests

Let's make our first HTTP request. For this, we will use the reqwest library. It has both blocking and asynchronous APIs for making HTTP calls. We'll start off with the blocking API, because it's easier.
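
For reference, here is a sketch of roughly what the Cargo.toml dependencies look like for the snippets in this post; the exact version numbers are assumptions, and reqwest's blocking feature has to be enabled:

[dependencies]
reqwest = { version = "0.10", features = ["blocking"] }
select = "0.4"
rayon = "1.3"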

use std::io::Read;

fn main() {
    let client = reqwest::blocking::Client::new();
    let origin_url = "https://rolisz.ro/";
    let mut res = client.get(origin_url).send().unwrap();
    println!("Status for {}: {}", origin_url, res.status());

    let mut body  = String::new();
    res.read_to_string(&mut body).unwrap();
    println!("HTML: {}", &body[0..40]);
}
> cargo run
   Compiling rust_crawler v0.1.0 (D:\Programming\rust_crawler)
    Finished dev [unoptimized + debuginfo] target(s) in 2.30s
     Running `target\debug\rust_crawler.exe`
Status for https://rolisz.ro/: 200 OK
HTML: <!DOCTYPE html>
<html lang="en">
<head>

We create a new reqwest blocking client, create a GET request and send it. The send call returns a Result, which we just unwrap for now. We print out the status code to make sure the request succeeded, then we read the content of the response into a mutable String and print it out. So far so good.

Now let's parse the HTML and extract all the links we find. For this we will use the select crate, which can parse HTML and allows us to search through the nodes.

use std::io::Read;
use select::document::Document;
use select::predicate::Name;

fn main() {
    let client = reqwest::blocking::Client::new();
    let origin_url = "https://rolisz.ro/";
    let mut res = client.get(origin_url).send().unwrap();
    println!("Status for {}: {}", origin_url, res.status());

    let mut body  = String::new();
    res.read_to_string(&mut body).unwrap();
   
    Document::from(body.as_str())
        .find(Name("a"))
        .filter_map(|n| n.attr("href"))
        .for_each(|x| println!("{}", x));
}
> cargo run --color=always --package rust_crawler --bin rust_crawler
   Compiling rust_crawler v0.1.0 (D:\Programming\rust_crawler)
    Finished dev [unoptimized + debuginfo] target(s) in 2.65s
     Running `target\debug\rust_crawler.exe`
Status for https://rolisz.ro/: 200 OK
https://rolisz.ro
https://rolisz.ro
https://rolisz.ro/projects/
https://rolisz.ro/about-me/
https://rolisz.ro/uses/
https://rolisz.ro/tag/trips/
https://rolisz.ro/tag/reviews/
#subscribe
/2020/02/13/lost-in-space/
/2020/02/13/lost-in-space/
/author/rolisz/
/author/rolisz/
...
/2020/02/07/interview-about-wfh/
/2020/02/07/interview-about-wfh/
/2019/01/30/nas-outage-1/
/2019/01/30/nas-outage-1/
/author/rolisz/
/author/rolisz/
https://rolisz.ro
https://rolisz.ro
https://www.facebook.com/rolisz
https://twitter.com/rolisz
https://ghost.org
javascript:;
#

We search for all the anchor tags, filter only those that have a valid href attribute and we print the value of those attributes.

We see all the links in the output, but there are some issues. First, some of the links are absolute, some are relative, and some are pseudo-links used for doing Javascript things. Second, the links that point towards posts are duplicated and third, there are links that don't point towards something on my blog.

The duplicate problem is easy to fix: we put everything into a HashSet and then we'll get only a unique collection of URLs.

use std::io::Read;
use select::document::Document;
use select::predicate::Name;
use std::collections::HashSet;

fn main() {
    let client = reqwest::blocking::Client::new();
    let origin_url = "https://rolisz.ro/";
    let mut res = client.get(origin_url).send().unwrap();
    println!("Status for {}: {}", origin_url, res.status());

    let mut body  = String::new();
    res.read_to_string(&mut body).unwrap();

    let found_urls = Document::from(body.as_str())
        .find(Name("a"))
        .filter_map(|n| n.attr("href"))
        .map(str::to_string)
        .collect::<HashSet<String>>();
    println!("URLs: {:#?}", found_urls)
}

First we have to convert the URLs from the borrowed str type to owned String values, so that they no longer borrow from the original string which contains the whole HTML. Then we insert all the strings into a hash set, using Rust's collect function, which handles insertion into all kinds of containers, in all kinds of situations.
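
As a small standalone illustration (my own example, not part of the crawler) of how collect picks the container from the type annotation:

use std::collections::HashSet;

fn main() {
    let words = ["a", "b", "a"];
    // The target container is chosen by the type annotation on the left.
    let as_vec: Vec<String> = words.iter().map(|s| s.to_string()).collect();
    let as_set: HashSet<String> = words.iter().map(|s| s.to_string()).collect();
    assert_eq!(as_vec.len(), 3);
    assert_eq!(as_set.len(), 2); // the duplicate "a" is collapsed
}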

To solve the other two problems we have to parse the URLs, using methods provided by reqwest.

use std::io::Read;
use select::document::Document;
use select::predicate::Name;
use std::collections::HashSet;
use reqwest::Url;

fn get_links_from_html(html: &str) -> HashSet<String> {
    Document::from(html)
        .find(Name("a").or(Name("link")))
        .filter_map(|n| n.attr("href"))
        .filter_map(normalize_url)
        .collect::<HashSet<String>>()
}

fn normalize_url(url: &str) -> Option<String> {
    let new_url = Url::parse(url);
    match new_url {
        Ok(new_url) => {
            if new_url.has_host() && new_url.host_str().unwrap() == "ghost.rolisz.ro" {
                Some(url.to_string())
            } else {
                None
            }
        },
        Err(_e) => {
            // Relative urls are not parsed by Reqwest
            if url.starts_with('/') {
                Some(format!("https://rolisz.ro{}", url))
            } else {
                None
            }
        }
    }
}

fn main() {
    let client = reqwest::blocking::Client::new();
    let origin_url = "https://rolisz.ro/";
    let mut res = client.get(origin_url).send().unwrap();
    println!("Status for {}: {}", origin_url, res.status());

    let mut body = String::new();
    res.read_to_string(&mut body).unwrap();

    let found_urls = get_links_from_html(&body);
    println!("URLs: {:#?}", found_urls)
}

We moved all the logic to a function get_links_from_html. We apply another filter_map to the links we find, in which we check if we can parse the URL. If we can, we check if there is a host and if it's equal to my blog. Otherwise, if we can't parse, we check if it starts with a /, in which case it's a relative URL. All other cases lead to rejection of the URL.

Now it's time to start going over these links that we get so that we crawl the whole blog. We'll do a breadth first traversal and we'll have to keep track of the visited URLs.

use std::time::Instant;

fn fetch_url(client: &reqwest::blocking::Client, url: &str) -> String {
    let mut res = client.get(url).send().unwrap();
    println!("Status for {}: {}", url, res.status());

    let mut body  = String::new();
    res.read_to_string(&mut body).unwrap();
    body
}

fn main() {
    let now = Instant::now();

    let client = reqwest::blocking::Client::new();
    let origin_url = "https://rolisz.ro/";

    let body = fetch_url(&client, origin_url);

    let mut visited = HashSet::new();
    visited.insert(origin_url.to_string());
    let found_urls = get_links_from_html(&body);
    let mut new_urls = found_urls
    	.difference(&visited)
        .map(|x| x.to_string())
        .collect::<HashSet<String>>();

    while !new_urls.is_empty() {
        let mut found_urls: HashSet<String> = new_urls.iter().map(|url| {
            let body = fetch_url(&client, url);
            let links = get_links_from_html(&body);
            println!("Visited: {} found {} links", url, links.len());
            links
        }).fold(HashSet::new(), |mut acc, x| {
                acc.extend(x);
                acc
        });
        visited.extend(new_urls);
        
        new_urls = found_urls
        	.difference(&visited)
            .map(|x| x.to_string())
            .collect::<HashSet<String>>();
        println!("New urls: {}", new_urls.len())
    }
    println!("URLs: {:#?}", found_urls);
    println!("{}", now.elapsed().as_secs());

}

First, we moved the code to fetch a URL to its own function, because we will be using it in two places.

Then the idea is that we have a HashSet containing all the pages we have visited so far. When we visit a new page, we find all the links in that page and we subtract from them all the links that we have previously visited. These will be new links that we will have to visit. We repeat this as long as we have new links to visit.

So we run this and we get the following output:

Status for https://rolisz.ro/favicon.ico: 200 OK
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Custom { kind: InvalidData, error: "stream did not contain valid UTF-8" }', src\libcore\result.rs:1165:5
stack backtrace:
   0: core::fmt::write
             at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14\/src\libcore\fmt\mod.rs:1028
   1: std::io::Write::write_fmt<std::sys::windows::stdio::Stderr>
             at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14\/src\libstd\io\mod.rs:1412
   2: std::sys_common::backtrace::_print
             at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14\/src\libstd\sys_common\backtrace.rs:65
   3: std::sys_common::backtrace::print
             at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14\/src\libstd\sys_common\backtrace.rs:50
...

The problem is that our crawler tries to download pictures and other binary files as text. A Rust String has to be valid UTF-8, so when read_to_string tries to put arbitrary bytes into it, the read fails and our unwrap panics. We could solve this in two different ways: either download URLs as bytes and convert to strings only those that we know are HTML, or skip the URLs that are not HTML. Because I am interested only in the textual content of my blog, I will implement the latter solution.
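
For completeness, here is a minimal sketch of the first option, which we will not use: download the body as bytes and only turn it into a String when the Content-Type header says the response is HTML. The function name and the lossy conversion are my own choices, not part of the final code.

fn fetch_if_html(client: &reqwest::blocking::Client, url: &str) -> Option<String> {
    let res = client.get(url).send().ok()?;
    // Only treat the body as text when the server says it is HTML.
    let is_html = res
        .headers()
        .get(reqwest::header::CONTENT_TYPE)
        .and_then(|v| v.to_str().ok())
        .map_or(false, |v| v.contains("text/html"));
    if !is_html {
        return None;
    }
    let bytes = res.bytes().ok()?;
    Some(String::from_utf8_lossy(&bytes).into_owned())
}

The approach actually used, skipping URLs that do not look like HTML pages, is the following: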

use std::path::Path;

// Note: this returns true when the URL has *no* file extension, which is what
// we use to decide that it probably points to an HTML page.
fn has_extension(url: &&str) -> bool {
    Path::new(url).extension().is_none()
}

fn get_links_from_html(html: &str) -> HashSet<String> {
    Document::from(html)
        .find(Name("a").or(Name("link")))
        .filter_map(|n| n.attr("href"))
        .filter(has_extension)
        .filter_map(normalize_url)
        .collect::<HashSet<String>>()
}

To determine whether a URL points to an HTML page, we check whether it has a file extension, and we use that as a filter in the function which retrieves links from the HTML.

Writing the HTML to disk

We are now getting all the HTML we want, time to start writing it to disk.

use std::fs;

fn write_file(path: &str, content: &str) {
    fs::create_dir_all(format!("static{}", path)).unwrap();
    fs::write(format!("static{}/index.html", path), content).unwrap();
}

fn main() {
    let now = Instant::now();

    let client = reqwest::blocking::Client::new();
    let origin_url = "https://rolisz.ro/";

    let body = fetch_url(&client, origin_url);

    write_file("", &body);
    let mut visited = HashSet::new();
    visited.insert(origin_url.to_string());
    let found_urls = get_links_from_html(&body);
    let mut new_urls = found_urls
    	.difference(&visited)
        .map(|x| x.to_string())
        .collect::<HashSet<String>>();

    while new_urls.len() > 0 {
        let mut found_urls: HashSet<String> = new_urls
        	.iter()
            .map(|url| {
                let body = fetch_url(&client, url);
                write_file(&url[origin_url.len() - 1..], &body);
                let links = get_links_from_html(&body);
                println!("Visited: {} found {} links", url, links.len());
                links
        })
        .fold(HashSet::new(), |mut acc, x| {
                acc.extend(x);
                acc
        });
        visited.extend(new_urls);
        new_urls = found_urls
            .difference(&visited)
            .map(|x| x.to_string())
            .collect::<HashSet<String>>();
        println!("New urls: {}", new_urls.len())
    }
    println!("URLs: {:#?}", found_urls);
    println!("{}", now.elapsed().as_secs());

}

We use the create_dir_all function, which works like mkdir -p in Linux to create the nested folder structure. We write the HTML page to the index.html file in the same folder structure as the URL structure. Most web servers will then serve the index.html file when going to the URL, so the output in the browser will be the same as the one from Ghost serving dynamic pages.
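
To make the folder mapping concrete, here is a small test I wrote as a sketch (under the assumptions of the code above) of how a post URL turns into a folder plus index.html:

#[test]
fn url_to_path_mapping() {
    let origin_url = "https://rolisz.ro/";
    let url = "https://rolisz.ro/2020/03/01/web-crawler-in-rust/";
    // This is the slice passed to write_file in the loop above:
    // everything after "https://rolisz.ro", keeping the leading slash.
    let path = &url[origin_url.len() - 1..];
    assert_eq!(path, "/2020/03/01/web-crawler-in-rust/");
    // write_file(path, ...) then creates
    // static/2020/03/01/web-crawler-in-rust/index.html
}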

Speeding it up

Letting this run on my blog takes about 110 seconds. Let's see if we can speed it up by downloading the pages in parallel.

use rayon::prelude::*;

fn main() {
    let now = Instant::now();

    let client = reqwest::blocking::Client::new();
    let origin_url = "https://rolisz.ro/";

    let body = fetch_url(&client, origin_url);

    write_file("", &body);
    let mut visited = HashSet::new();
    visited.insert(origin_url.to_string());
    let found_urls = get_links_from_html(&body);
    let mut new_urls = found_urls
        .difference(&visited)
        .map(|x| x.to_string())
        .collect::<HashSet<String>>();

    while !new_urls.is_empty() {
        let found_urls: HashSet<String> = new_urls
            .par_iter()
            .map(|url| {
                let body = fetch_url(&client, url);
                write_file(&url[origin_url.len() - 1..], &body);

                let links = get_links_from_html(&body);
                println!("Visited: {} found {} links", url, links.len());
                links
            })
            .reduce(HashSet::new, |mut acc, x| {
                acc.extend(x);
                acc
            });
        visited.extend(new_urls);
        new_urls = found_urls
            .difference(&visited)
            .map(|x| x.to_string())
            .collect::<HashSet<String>>();
        println!("New urls: {}", new_urls.len())
    }
    println!("URLs: {:#?}", found_urls);
    println!("{}", now.elapsed().as_secs());
}

In Rust there is this awesome library called Rayon which provides a very simple primitive for running functions in parallel: par_iter, which is short for parallel iterator. It's an almost drop-in replacement for iter, which is part of the standard library for collections, and it runs the provided closure in parallel, taking care of boring stuff like thread scheduling. Besides changing iter to par_iter, we have to change the fold to reduce and provide a closure that returns the "zero" element, so it can generate multiple of them.
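
To see just the fold-to-reduce change in isolation, here is a standalone sketch (my own example, not the crawler code): reduce takes a closure that produces fresh "zero" values instead of a single initial value, because each parallel chunk of work needs its own accumulator.

use rayon::prelude::*;
use std::collections::HashSet;

fn merge_sets(sets: Vec<HashSet<String>>) -> HashSet<String> {
    sets.into_par_iter()
        // HashSet::new is called whenever rayon needs a fresh accumulator
        // for a chunk of work.
        .reduce(HashSet::new, |mut acc, x| {
            acc.extend(x);
            acc
        })
}

fn main() {
    let a: HashSet<String> = vec!["x".to_string()].into_iter().collect();
    let b: HashSet<String> = vec!["y".to_string(), "x".to_string()].into_iter().collect();
    assert_eq!(merge_sets(vec![a, b]).len(), 2);
}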

This reduces the running time to 70 seconds, down from 110 seconds.

Proper error handling

One more thing to fix in our program: error handling. Rust helps us a lot with error handling through its builtin Option and Result types, but so far we've been ignoring them, liberally sprinkling unwrap everywhere. unwrap returns the inner value or panics if there is an error (for Result) or a None value (for Option). To handle these correctly, we should create our own error type.

One appearance of unwrap that we can get rid of easily is in the normalize_url function. In the if condition we have new_url.has_host() && new_url.host_str().unwrap() == "ghost.rolisz.ro". This can't possibly panic, because we first check that the host string exists, but there is a nicer way to express this in Rust:

if let Some("ghost.rolisz.ro") = new_url.host_str() {
    Some(url.to_string())
} else {
    None
}

To my Rust newbie eyes, it looks really weird at first glance, but it does make sense eventually.

For the other cases we need to define our own Error type, which will be a wrapper around the other types, providing a uniform interface to all of them:

use std::io::Error as IoErr;

#[derive(Debug)]
enum Error {
    Write { url: String, e: IoErr },
    Fetch { url: String, e: reqwest::Error },
}

type Result<T> = std::result::Result<T, Error>;

impl<S: AsRef<str>> From<(S, IoErr)> for Error {
    fn from((url, e): (S, IoErr)) -> Self {
        Error::Write {
            url: url.as_ref().to_string(),
            e,
        }
    }
}

impl<S: AsRef<str>> From<(S, reqwest::Error)> for Error {
    fn from((url, e): (S, reqwest::Error)) -> Self {
        Error::Fetch {
            url: url.as_ref().to_string(),
            e,
        }
    }
}

We have two kinds of errors in our crawler: IoErr (an alias for std::io::Error) and reqwest::Error. The first is returned when trying to write a file, the second when we try to fetch a URL. Besides the original error, we add some context, such as the URL or path that was accessed when we got the error. We provide implementations to convert from each library error to our own error type, and we also define a Result helper type so that we don't always have to type out our Error type.

fn fetch_url(client: &reqwest::blocking::Client, url: &str) -> Result<String> {
    let mut res = client.get(url).send().map_err(|e| (url, e))?;
    println!("Status for {}: {}", url, res.status());

    let mut body = String::new();
    res.read_to_string(&mut body).map_err(|e| (url, e))?;
    Ok(body)
}

fn write_file(path: &str, content: &str) -> Result<()> {
    let dir = format!("static{}", path);
    fs::create_dir_all(&dir).map_err(|e| (&dir, e))?;
    let index = format!("static{}/index.html", path);
    fs::write(&index, content).map_err(|e| (&index, e))?;

    Ok(())
}

Our two functions that can produce errors now return a Result type. All the operations that can return an error have a map_err applied to the result, pairing the error with the URL or path, and the ? operator then converts that tuple into our own Error through the From implementations above.

let (found_urls, errors): (Vec<Result<HashSet<String>>>, Vec<_>) = new_urls
      .par_iter()
      .map(|url| -> Result<HashSet<String>> {
            let body = fetch_url(&client, url)?;
            write_file(&url[origin_url.len() - 1..], &body)?;

            let links = get_links_from_html(&body);
            println!("Visited: {} found {} links", url, links.len());
            Ok(links)
       })
       .partition(Result::is_ok);

Our main loop to download new URLs changes a bit. Our closure now returns either a set of URLs or an error. To separate the two kinds of results, we partition the iterator based on Result::is_ok, resulting in two vectors, one with HashSets, one with Errors, but both still wrapped in Results.

visited.extend(new_urls);
new_urls = found_urls
    .into_par_iter()
    .map(Result::unwrap)
    .reduce(HashSet::new, |mut acc, x| {
        acc.extend(x);
        acc
    })
    .difference(&visited)
    .map(|x| x.to_string())
    .collect::<HashSet<String>>();
println!("New urls: {}", new_urls.len());

We handle each vector separately. For the successful one we have to unwrap each item and then merge all the HashSets into one.

println!(
   "Errors: {:#?}",
    errors
        .into_iter()
        .map(Result::unwrap_err)
        .collect::<Vec<Error>>()
)

For the Vec containing the Errors, we have to unwrap the errors (with unwrap_err) and then we just print them out.

And with that we have a small and simple web crawler, which runs fairly fast and which handles most (all?) errors correctly. The final version of the code can be found here.

Special thanks to Cedric Hutchings and lights0123 who reviewed my code on Code Review.

Lost in Space
https://rolisz.ro/2020/02/13/lost-in-space/ (Thu, 13 Feb 2020 20:57:00 GMT)

As I have mentioned previously, I like sci-fi. At least I used to. I'm not sure if my tastes have changed or if they don't make them like they used to. But Lost in Space is a sci-fi family show that I really enjoy, together with my wife.

It's actually one of the quite rare sci-fi TV shows that are for the whole family. It's PG13, with no swear words, no nudity and very little violence. There is sci-fi in the show, but the emphasis falls heavily on the Robinson family and how they stick together and cope with various difficulties.

As the title suggests, the Robinsons get lost in space. They were traveling on a space ship from Earth to Alpha Centauri, because Earth is slowly becoming uninhabitable. Alpha Centauri is supposed to be the new home for humanity and the ship, called The Resolute, is taking groups of colonists there. These colonists have to pass various exams to prove they can survive and contribute to the Alpha Centauri colony. Each colonist family has a small ship, called a Jupiter, with which they can land on planets.

[Image: The five Robinsons]

The Robinson family consists of 5 members. John and Maureen are the parents. John was in the army, so he is the brawn of the family, while Maureen is a scientist, so she is the brains. Judy, the oldest child, who is adopted, is studying to be a doctor and is a great athlete. Penny, the middle child, is the wild and creative type, while Will is the young prodigy, who is particularly interested in geology and in general just knows a lot of stuff, though he can't handle pressure too well.

On the way to Alpha Centauri, the Resolute is attacked by something and all the colonists evacuate. A wormhole appears and the ships are transported somewhere very far away. The Robinsons crash land on a glacier on an unknown planet. To save Judy, John and Will go to gather magnesium to melt the ice which traps her. Will gets lost and is surrounded by wildfire, when he sees an alien ship and a dismembered robot. He puts the robot back together and, in gratitude, the robot saves Will and then helps get Judy out of the ice.

Maureen realizes that she doesn't recognize any of the stars around them, so they are lost. Eventually they meet with other colonists whose ships have crashed on the same planet and they do their best to get in contact with the Resolute. However, they quickly find out that the planet occasionally gets a deadly burst of radiation from the nearby black hole, so they have to get away from there.

[Image: Will teaching the Robot]

Will and the robot form a special connection, with the robot following Will everywhere and kinda listening to his orders. Their dynamic is really cute, with Will trying to teach Robot various things, such as playing baseball or helping them repair things.

[Image: Dr. Smith]

But of course, there has to be a bad gal: Dr. Smith. The most annoying character in any TV show I've seen. She is a con woman, who is always lying and trying to manipulate others for her own benefit. She was almost jailed when the attack on the Resolute happened, but then she changed her identity and started a second life. The character is very good at gaslighting other people and is constantly sowing mistrust between the others. It's a really well done annoying character and after 2-3 episodes you wish somebody would just shoot her. Eventually, in season 2 she starts to become more helpful, but still in a selfish way.

The first season ends with them entering another black hole and getting transported to yet another solar system. But in the second season we learn a lot more about the robots and about how the Resolute was built, as the 24th colonist group is still trying to make its way to Alpha Centauri, and they seem to have an idea on how to get there. Of course, nothing goes well when the Robinsons are around and they constantly have to fix problems and improvise plans.

I really like this show and I'm glad that there are family friendly sci-fi shows out there.

Grade: 8

Interview about working from home
https://rolisz.ro/2020/02/07/interview-about-wfh/ (Fri, 07 Feb 2020 20:10:00 GMT)

This morning my friend David Achim, who's responsible for the Oradea Tech Hub, called me and asked me if I wanted to be interviewed about what it's like to work from home. Three hours later a reporter from ProTV was in my home (after I cleaned up and organized my home office) and sometime this evening I had my 15 seconds of fame.

tl;dr of my opinion of working from home: I love it, I couldn't imagine going to an office every day, but there are days when I don't leave the house at all and that's not good.

Source: ProTV

Boardgames Miniparty: 7 Wonders Duel
https://rolisz.ro/2020/01/26/7-wonders-duel/ (Sun, 26 Jan 2020 22:26:34 GMT)

Last night, my wife and I wanted to play a boardgame by ourselves. Looking through my collection, we decided to give 7 Wonders Duel a try. The original 7 Wonders game is played between 3 and 7 players. The Duel version is made for two players, with several nice twists to still keep some surprises in the game. Technically, the original can also be played with two players, but you have to create a fake player, and it's really weird.

[Image: Military and scientific progress tracking]

Most of the game mechanics are similar to the original game. Just like there, despite the fact that the game is named 7 Wonders, the goal is not to build one of the 7 wonders of the ancient world, but to develop your civilization over 3 eras and get the most points.

One of the twists is that you can win before reaching the end of the three eras. You can do so either by conquering the other player's city (moving the red peg pictured above to the opposite side), or by collecting enough scientific symbols. In the original, military battles were pairwise between players and just led to gaining points, and scientific cards were really hard to explain properly. Now it's much simpler.

[Image: 4 of the Wonders you can build]

In the original, every player had 1 wonder they could build in stages. Now every player has 4 different wonders, each of which gives different benefits. Some have a really cool power of giving you an extra turn, and some let you destroy a card belonging to your opponent.

[Image: Cards that you can build]

The card system has not changed much. There is still a level-up system and most buildings still cost resources. Resources are produced by some cards. When you don't have enough resources to build something, you can still buy them, but now they are more expensive (2 gold plus however many of those resources your opponent produces). Some buildings give you victory points, which will get you the classical, civilian victory. But what has changed is how you get cards.

[Image: Card placement]

Cards are placed in a hierarchical structure as pictured above. You can only pick up visible cards on your turn and every alternate layer is hidden from you. So you can't really plan ahead, because you still get surprise cards from time to time. And in some cases you can really force the other player's hand, especially in the 3rd era, by leaving them only one card that they can pick up.

I felt that this game is simpler than 7 Wonders, but it still has 20 pages in the rulebook. Even so, unpacking the game, reading the rules for the first time and doing the scoring took us less than an hour and a half. And it was a quite fun 1.5 hours with my wife :*

Score: 8

Blogs are best served static
https://rolisz.ro/2020/01/21/blogs-are-best-served-static/ (Tue, 21 Jan 2020 21:26:05 GMT)

Or there and back again.

Earlier this month I moved to Ghost. I did that because I wanted to have a nice editor, I wanted to be able to write easily from anywhere and I wanted to spend less time in the terminal. But I knew moving to a dynamic site would have performance penalties.

When looking at the loading time of a single page in the browser's inspector tools, everything seemed fine: load times around 1.3s, seemingly even faster than my old site. But then, I did some load tests using loader.io to see how my new blog performs. The results were not pretty: my blog fell over with 2-3 concurrent requests sustained for 10 seconds. Ghost couldn't spawn more threads, segfaulted and then restarted after 20 seconds.

I am using the smallest DigitalOcean instance, with 1 virtual CPU and 1 GB of RAM. I temporarily resized the instance to have 2 vCPUs and then 3 vCPUs, but the results were still pretty poor: even the 3 vCPU instance couldn't handle more than 10 connections per second for 10 seconds.

While this would not be a problem with my current audience (around 50-100 page views per day), I have great dreams of my blog making it to the front page of Hacker News and getting ten thousand views in one day. And I would rather not have my site fall over in such cases.

I knew my static blog could easily sustain 1000 simultaneous connections, so I went back and combined the two, to get the best of both worlds: a nice frontend to write posts and preview them, with the speed of a static site.

I looked a bit into using some tools like Gatsby or Eleventy to generate the static site, but they were quite complicated and required maintaining my theme in yet another place. But I found a much simpler solution: wget. Basically I wrote a crawler for my own website, dumped everything to HTML and reuploaded it to my website.

In order to do this, I set up nginx to proxy a subdomain to the Ghost blog. Initially I wanted to set it up as a "folder" under my domain, but Ghost Admin doesn't play nice with folders. I won't link to it here, both because I don't want it widely available and for another reason I'll explain later.

Then I used the following bash script:

#!/bin/bash

# Define urls and https
from_url=https://subdomain.rolisz.ro
to_url=https://rolisz.ro

# Copy blog content
wget --recursive --no-host-directories --directory-prefix=static --timestamping --reject=jpg,png,jpeg,JPG,JPEG --adjust-extension --timeout=30  ${from_url}/

# Copy 404 page
wget --no-host-directories --directory-prefix=static --adjust-extension --timeout=30 --no-parent --content-on-error --timestamping ${from_url}/404.html

# Copy sitemaps
wget --recursive --no-host-directories --directory-prefix=static --adjust-extension --timeout=30 --no-parent ${from_url}/sitemap.xsl
wget --recursive --no-host-directories --directory-prefix=static --adjust-extension --timeout=30 --no-parent ${from_url}/sitemap.xml
wget --recursive --no-host-directories --directory-prefix=static --adjust-extension --timeout=30 --no-parent ${from_url}/sitemap-pages.xml
wget --recursive --no-host-directories --directory-prefix=static --adjust-extension --timeout=30 --no-parent ${from_url}/sitemap-posts.xml
wget --recursive --no-host-directories --directory-prefix=static --adjust-extension --timeout=30 --no-parent ${from_url}/sitemap-authors.xml
wget --recursive --no-host-directories --directory-prefix=static --adjust-extension --timeout=30 --no-parent ${from_url}/sitemap-tags.xml

# Replace subdomain with real domain
LC_ALL=C find ./static -type f -not -wholename *.git* -exec sed -i -e "s,${from_url},${to_url},g" {} +;

I start by crawling the front page of the blog. I exclude images, because they will be on the same server: if I upload them with Ghost, I can then have nginx serve them from where Ghost uploads them, with the correct URL. The --adjust-extension option is needed to automatically create .html files, instead of leaving extensionless files around.

Then I crawl the 404 page and the sitemap pages, which are all well defined.

Wget has a --convert-links option, but it's not too smart. It fails badly on image srcsets, screwing up extensions. Because of this I didn't use it, instead opting to use good ol' sed to replace all occurrences of the subdomain URL with the normal URL. And because I don't want to add a special exception to this post, I can't include my actual subdomain in the text, or it would get converted too.

For now, I run this locally, but I will set up a script that is called on a cronjob or something.

After downloading all the HTML, I upload the content to my blog with rsync:

rsync -pruv static rolisz.ro:/var/www/static/

Downloading takes about 2 minutes, uploading about 5 seconds.
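
To eventually run this on a schedule, a minimal wrapper could chain the two steps. This is only a sketch; the script name and paths are placeholders, not what I actually use:

#!/bin/bash
# Hypothetical deploy wrapper: crawl the Ghost subdomain into ./static,
# then sync the result to the web server (paths and host are placeholders).
set -euo pipefail

./crawl_blog.sh
rsync -pruv static rolisz.ro:/var/www/static/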

After doing all this, I redid the loader.io tests.

[Image: Left: latency, right: number of clients]

The static site managed to serve up to 1800 clients per second, averaging 1000/s for a minute. That's good enough for now!

Longer term, I plan to rewrite the crawler, to be smarter/faster. For example, wget doesn't have any parallelism built in. Old posts don't change often, so they could be skipped most of the time. But doing that correctly takes more time, so for now, I'll stick to this simple solution!

Moving from Acrylamid to Ghost
https://rolisz.ro/2020/01/18/moving-from-acrylamid-to-ghost/ (Sat, 18 Jan 2020 20:09:48 GMT)

When I decided to move to Ghost at the beginning of the month, I realized that I needed to act quickly, because I've kept postponing this for years, so either I do it during the winter holidays, or it gets put off for who knows how long. So I set up the Ghost instance on DigitalOcean. That was a simple process. I also moved manually the last 20 posts and the 10 most viewed posts, so that there would be some content here and then I switched the DNS for rolisz.ro to point to the Ghost instance. Moving the posts manually took me about two days.

But I started blogging 10 years ago. In the meantime, I have written over 400 posts. Some of them have not aged well and I deleted them. Some were pointing to YouTube videos that no longer exist, some were references to my exams from university and some are posts that I am simply too ashamed that I ever wrote. But that still leaves me with about 350 posts which I wanted to keep.

I didn't want to move another 330 posts by hand, so I wrote a tool to export my data from Acrylamid into JSON and then to import them into Ghost.

Ghost uses MobileDoc to store post content. The recommended way of importing posts from external sources is to use the Ghost Admin API to import HTML, and then Ghost will do a best-effort conversion to MobileDoc. Unfortunately, they say it's a lossy conversion, so some things might not look the same when Ghost renders HTML from the MobileDoc.

My posts were in Markdown format. The easiest way to hack an exporter together was to piggyback on top of Acrylamid, by modifying the view that generated the search JSON. That view already exported a JSON, but it was stripped of HTML and it didn't contain some metadata, such as the URL. I removed the HTML stripping, enabled all filters and added the needed metadata. Because I had a custom picture gallery filter, I had to modify it to add <!--kg-card-begin: html--> before the gallery code and <!--kg-card-end: html--> after it. These two comments indicate to the Ghost importer that it should put what's between them in an HTML card.

The importer uses the recommended Admin API for creating the posts. To use the Admin API, you have to create a new custom integration and get the admin API key from there. To upload HTML formatted posts, you have to append ?source=html to the post creation endpoint.

import jwt  # PyJWT
import requests
from datetime import datetime as date

# ADMIN_KEY is the Admin API key from the custom integration, in the form "id:secret"
# Split the key into ID and SECRET
id, secret = ADMIN_KEY.split(':')

def write_post(title, post_date, tags, content=None):
    # Prepare header and payload
    iat = int(date.now().timestamp())

    header = {'alg': 'HS256', 'typ': 'JWT', 'kid': id}
    payload = {
        'iat': iat,
        'exp': iat + 5 * 60,
        'aud': '/v3/admin/'
    }

    # Create the token (including decoding secret)
    token = jwt.encode(payload, bytes.fromhex(secret), algorithm='HS256', headers=header)

    # Make an authenticated request to create a post
    url = 'https://rolisz.ro/ghost/api/v3/admin/posts/?source=html'
    headers = {'Authorization': 'Ghost {}'.format(token.decode())}
    body = {'posts': [{'title': title, 'tags': tags, 'published_at': post_date, 'html': content}]}
    r = requests.post(url, json=body, headers=headers)

    return r
Python function to upload a new post to Ghost

Because I had already manually moved some posts (and because I ran the importer script on a subset of all the posts first), I needed to check whether a post already existed before inserting it, otherwise Ghost would create a duplicate entry for me. To do this, I used the fact that Ghost would create the same slug from titles as Acrylamid did. This actually failed for about 5 posts (for example, ones which had apostrophes or accented letters in the title), but I cleaned those up manually.

import datetime
import json
from time import sleep

import requests

posts = json.load(open("posts.json"))

for f in posts:
    key = "https://rolisz.ro" + f['url']
    resp = requests.get(key)
    sleep(0.5)
    d = datetime.datetime.strptime(f["date"], "%Y-%m-%dT%H:%M:%S%z")
    if resp.status_code != 200:
        if "/static/images/" in f['content']:
            f['content'] = f['content'].replace("/static/images/", "/content/images/")
        write_post(f['title'], d.isoformat(timespec='milliseconds'),
                   f['tags'], f['content'])
        sleep(1)
Code to prepare posts for upload

Ghost also expected the post publish date to have timezone information, which my exporter didn't add, so I had to do a small conversion here. I also corrected the paths of the images. Previously they were in a folder called static, while Ghost stores them in content.

Because my Ghost blog is hosted on a 5$ DigitalOcean instance (referral link), it couldn't handle my Python script hammering it with several posts a second, so I had to add some sleeps, after checking the existence of posts and after uploading them.

After uploading all posts like this, I still had to do some manual changes. For example, Ghost has the concept of featured image and I wanted to use it. In general I want my posts going forward to have at least one image, even if it's a random one from Unsplash. In some cases, I could use an existing image from a post as a featured image, in other cases I had to find a new one. Also, code blocks weren't migrated smoothly through the MobileDoc converter, so most of them needed some adjustment.

Going through all my old posts took me a couple of days (much less, though, than it would have taken without the importer) and it was a fun nostalgia trip down what kind of things were on my mind 10 years ago. For example, back then I was very much into customizing my Windows, with all kinds of Visual Styles, desktop gadgets and tools to make you more "productive". I now use only one thing from that list: F.lux. Also, the reviews that I did of books, movies and TV shows were much more bland (at least I hope that I now write in a more entertaining style).

Books of 2019
https://rolisz.ro/2020/01/07/books-of-2019/ (Tue, 07 Jan 2020 21:10:06 GMT)

So, I have good news and bad news about my book reading habit in 2019. The good news is that I've read more Christian books than other kinds of books. The bad news is that I've read the fewest books ever in 2019: six. The interesting fact is that I didn't read a single fiction book.

I think part of the problem was that I had two books that were thicker than average and more difficult to process. Because of this, I couldn't just pick them up to read them before going to bed, I had to be in the right mood for reading them. Until now I had the rule that I'm only reading one book at a time, to make sure that I don't start a thousand books and not finish any one of them. I think I'll have to dispense with this rule in 2020 and start reading several books at the same time, both heavier stuff and lighter stuff.

[Image: Jordan Peterson - Photo by Gage Skidmore]

One of these thicker books was 12 Rules for Life by Jordan Peterson. The author is quite (in)famous; many people consider him either the wisest or the worst human being currently living on Earth. I consider him to be both rational and commonsensical, which is quite rare today. Some people can argue rationally, applying syllogisms, modus ponens and other logical methods to derive certain things, which then don't feel right, while others spew off things that feel right, but are horribly inconsistent.

I feel like in my bubble of young people that I interact with on a regular basis this book would be very necessary. Too often I see some of them have some normal difficulty happen to them (for example, a college exam) and they lose it. Some start whining and complaining about how their life is the most difficult ever. Some bury their heads in the sand and try to forget about the problem until it's (almost) too late. JBP's rules that you have the responsibility to help yourself, and that you should compare yourself only to who you were in the past, not to others, would improve the lives of those people.

Another interesting idea that remained with me is that some concepts are worth keeping in mind, even if they seem outdated, exactly because of their long/diverse life. One such example is from biology: hierarchies exist even in lobsters, so we shouldn't go full communist in trying to smash hierarchies because they are a social construct, because they've been around for far longer than we can imagine and also because they are not only a social construct (as proven by their presence in lobsters). Another example is from old books, such as the Epic of Gilgamesh (the oldest known literature). JBP argues that those books have survived because generation after generation found useful the wisdom encoded into them. Sometimes it takes a great deal of knowledge to extract that wisdom (because of linguistic and cultural barriers for example), but we should still strive to learn from them, because human nature hasn't changed over the last 10 thousand years. A similar idea can be found in the Economics of Good and Evil, which I read the previous year.

One thing I didn't really like about 12 Rules for Life was where the author tried to provide his own explanations of the Bible. There he falls very short.

[Image: Stephen Kaung]

Another book that stayed with me was "Men After God's Own Heart: Eight Biographies from the Book of Genesis" by Stephen Kaung. The book goes through most of the major characters from Genesis and provides some very deep insight into their relationship with God. For example, it raises a very interesting question: why did Abel become a shepherd, when they didn't eat meat at the time (only after the flood does God give Noah permission to eat animals)? And Stephen Kaung says that it's because Abel had heard from his parents that God had sacrificed an animal when they sinned, and he wanted to be ok with God, so he chose for himself an occupation which didn't benefit him materially, but which helped him have a relationship with God.

[Image: John Lennox - Photo by Anna Lutz]

The last book I'll mention is "Have no fear" by John Lennox. It's a very short book about being light and salt in this world and about testifying about our faith. Since I personally struggle to share with others about my faith, I hope I will be able to put into practice some of the concrete advice the author gives in this book.

I really hope in 2020 I will be able to read more books!

Moving to Ghost
https://rolisz.ro/2020/01/05/moving-to-ghost/ (Sun, 05 Jan 2020 19:54:04 GMT)

After about 5 years of blogging using a static site engine, I decided to move back to a normal blogging platform. I guess there really is nothing new under the sun and everything just keeps moving in cycles. Sure, this time the platform is node.js based, not PHP, but those are details.

I've been wanting to move for about two years. First and foremost, because Acrylamid, the SSE I was using, became deprecated, meaning it wouldn't get any more updates. It had some odd bugs here and there, it was stuck on Python 2.7, so it was time to find something new.

It took me two years to move because I have a lot of hand made stuff for Acrylamid. I have more than 300 posts and I use a custom gallery filter, a custom search implementation and I wrote a bit of math and some code in those posts. I kept searching for something that would enable me to move easily, that would have all the features I was looking for and so on.

Initially I was looking for another SSE, something like Zola, which is written in Rust. But because I have lots of custom stuff (and because Acrylamid has a weird frontmatter syntax), moving to another SSE is actually kinda difficult.

Another problem I had was that writing posts when not at home was always a bit clunky. Sure, I have a Docker file for running Acrylamid, but there were always issues, such as image galleries not working properly because the images are only on my main machine. And to be fair, I kinda got bored of the command line. Always having to activate a virtualenv, always fighting Vim's paste system, making sure image links are inserted correctly and so on.

So, having plenty of reasons to move away, I finally decided that during these two weeks of winter break I would move to the Ghost blogging platform. I liked the UI. It should help with posting more easily, from anywhere. Hopefully this means I'll also post more often.

For now, I have moved only about 30 posts over to this new interface. Comments are enabled, but search is not.

If you see anything badly broken, leave a comment. If you really miss something, send me a message and I'll prioritize moving it over to Ghost.

To many new posts in 2020!

]]>
<![CDATA[Boardgames Party: Rival Restaurants]]>Rival Restaurants is a game I ordered on Kickstarter. When ordering from KS, it's always a bit of a gamble, but my wife loves cooking, so I figured she would like it. Last night we played it with some friends and it was a blast.

The playing board, with the
]]>
https://rolisz.ro/2020/01/02/boardgames-party-rival-restaurants/5e0f65cafc44843fc8be4498Thu, 02 Jan 2020 14:44:27 GMT

Rival Restaurants is a game I ordered on Kickstarter. When ordering from KS, it's always a bit of a gamble, but my wife loves cooking, so I figured she would like it. Last night we played it with some friends and it was a blast.

[Image: The playing board, with the markets]

In Rival Restaurants, each player has a restaurant (d'oh) and a chef, and the goal of the game is to be the first one to reach a popularity score of 20 likes. To do this you need to cook recipes that get you likes. You get more points if you cook recipes that match your restaurant's cuisine.

Each recipe needs ingredients from different categories and you can only go to one market every day. Cooking generates garbage bags, which you have to dispose of, otherwise your recipes won't give you as many likes.

[Image: Some of the chefs]

You can barter with the other players. You can use various action cards to gain an advantage (or to put others at a disadvantage). Each chef has its own superpowers, such as blocking other players from accessing the market they are in.

[Image: The Rising Sun, the Japanese Sushi Bar]

The game is really well made physically. They blew through all their Kickstarter goals, so they added lots of extra nice touches. All the cardboard cutouts are really solid. And the coolest thing is that they have acrylic coins. I'm actually thinking about reusing them in other games too.

[Image: Acrylic coins]

Score: 9

]]>
<![CDATA[2019 in Review]]>This year something happened that has not happened in a long time: no posts at all for two months (June and July) :( Most months I also managed to post only once, so because of this I wrote only 19 posts in 2019, 3 fewer than in 2018. Life has

]]>
https://rolisz.ro/2020/01/01/2019-in-review/5e0f65cafc44843fc8be4499Wed, 01 Jan 2020 11:02:00 GMT

This year something happened that has not happened in a long time: no posts at all for two months (June and July) :( Most months I also managed to post only once, so because of this I wrote only 19 posts in 2019, 3 fewer than in 2018. Life has been quite busy, with the construction of the house eating up a lot of my time in the second half of the year.

The number of sessions finally stopped dropping: it was 10775 this year, compared to 10762 the previous year. The number of users went up by 10%, to 8725. Pageviews continued dropping, getting down to 16382.

The day with the most views was November 30, when I had 226 pageviews. I posted about Tokaido then. It seems I should post more often about board games. Interestingly enough, the week with the most views, distributed over several days, was the week of July 21-27, when I didn’t even post anything. What happened then was that a certain YouTube video https://www.youtube.com/watch?v=zHP9sD0onU8 , which linked to my selfie time lapse photo post, got really popular.

My most popular posts were the neural network one (as always, for the last 5 years) and the time lapse one. The most popular one from this year was the one where I finally posted after two months of silence.

I’m planning to finally redesign my blog (and change how it’s generated). Hopefully I will fill it with more and better content as well :)

]]>
<![CDATA[Cracking the human memory]]>I’ve never had a particularly good memory, despite what others might claim. While many times others are astonished that I remember something, often I am just able to find a pattern or some logical reasoning behind something and then I can deduce it when needed.

]]>
https://rolisz.ro/2019/12/30/cracking-the-human-memory/5e0f65cafc44843fc8be449aMon, 30 Dec 2019 10:30:00 GMT

I’ve never had a particularly good memory, despite what others might claim. While many times others are astonished that I remember something, often I am just able to find a pattern or some logical reasoning behind something and then I can deduce it when needed. I myself am astonished by the memorizing capabilities of my smart wife, who finished pharmacy school a couple of months ago. Comparing her exams, where she had to memorize long chemical reactions, to my exams, where most of the time I had to memorize a couple of basic things and could derive the rest, it’s clear she has a much better memory.

In high school, when I was doing lots of physics, going to competitions and doing pretty well at a national level, I still had trouble memorizing some of the formulas I needed. I never actually invested time in memorizing them, instead remembering some “logical things”, such as the fact that the focal length formula is all inverses, or that the gravitational force is symmetrical and proportional to the masses, so the formula must look a certain way. Many of the formulas I eventually memorized, but simply by brute forcing it: I wrote them down while solving exercises so many times that they ended up getting “burned” into my memory. Recently I helped my sister-in-law with her high school physics classes and in many cases, as soon as I looked at a problem, I knew the answer, but figuring out the reasoning behind it and explaining it took much longer.

It’s a similar story with programming. I have been programming for more than 10 years. In particular, I’ve been working in Python for around 8 years. And yet I still can’t remember some basic APIs that I’ve used a thousand times. Recently I had to look up how to delete a file in Python. Whenever I need to use the path manipulation API in Python, I have to look up the docs. Even in high school, when I wrote my own PHP framework, I didn’t remember a lot of the API. Because Python has a fairly simple syntax, I don’t really have problems with it, but in other languages that have more syntactic elements, such as Rust, I constantly kept forgetting even simple things, such as how to declare an enum.
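
For the record, the Python calls in question are one-liners; something along these lines (the file name and paths here are made up purely for illustration):

from pathlib import Path

# Deleting a file (the call I keep looking up); os.remove("old_draft.txt") also works
Path("old_draft.txt").unlink(missing_ok=True)   # missing_ok needs Python 3.8+

# Basic path manipulation with pathlib
post = Path("content") / "2019" / "cracking-memory.md"
print(post.suffix)   # ".md"
print(post.stem)     # "cracking-memory"
print(post.parent)   # content/2019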

While these can all be rationalized away, because StackOverflow is one Google away and can tell us everything a programmer needs to know to solve "all" problems that appear on the job, there is another domain where I feel my memory has impaired me more. For a long time I have wanted to memorize more Bible verses. The Bible itself is full of exhortations along these lines (Psalm 119:11: "I have stored up your word in my heart, that I might not sin against you.", Deuteronomy 11:18: "You shall therefore lay up these words of mine in your heart and in your soul"). I’ve set this as a goal for myself many times (in 2016, in 2017 and in 2018), but every time I failed miserably. I tried writing flash cards with the verses that I wanted to memorize, but sooner or later I would forget about them. Or the verses would be too long and I would find them really hard to memorize at once. All kinds of problems.

But this year I ran into several articles about "fixing" our memory. One of them is from Michael Nielsen, a really smart guy who writes about all kinds of things, from sequestering carbon dioxide from the atmosphere, to quantum computing, to machine learning. In his article he describes a bit of how memory works and how he is augmenting his own to memorize all kinds of things. Among other things, he is using it to read scientific articles and understand them better and to memorize programming APIs, and he explains why it’s not a waste of time to do this.

Also this year, a visitor to our church issued a challenge to the youth to memorize a book from the New Testament. This really motivated me to finally do this, so I decided to learn the letter to the Galatians. I knew my previous attempts had failed, so I needed some new methods, such as the ones described by Michael Nielsen.

The Spaced Repetition method

I had previously heard about Spaced Repetition memorizing. I tried it for a little while and had some success using it while learning German. But it requires effort to create your study decks; it’s not as simple as just reading something, so my past efforts quickly petered out. But this time I had the motivation (the challenge) and some additional information on how to best apply it, and I have successfully applied it for the last two months.

The idea behind spaced repetition is that memory decays along an exponential curve and each time you recall something, you boost the strength of that memory and make the forgetting curve less steep. The best results are obtained if you recall something just as you were about to forget it.

To exploit this, you have to review the pieces of knowledge that you want to memorize on an increasing schedule. After you memorize something once, you review it the next day, then after 2 days, then after 4, and so on. If you make a mistake, you start again by reviewing it the following day. This formula can be tweaked: if you have a hard time remembering something, you might want to review it sooner; if it came to mind instantly, you can review it much later.
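
As a toy sketch of this doubling schedule in Python (real spaced repetition software uses a more refined formula, with a per-card ease factor, but the shape is the same):

from datetime import date, timedelta

def next_interval(days: int, remembered: bool) -> int:
    # Toy schedule: double the interval after a successful recall,
    # reset to one day after a failure.
    return days * 2 if remembered else 1

interval = 1
review_day = date.today() + timedelta(days=interval)
for review in range(5):
    interval = next_interval(interval, remembered=True)
    review_day += timedelta(days=interval)
    print(f"review {review + 1}: in {interval} days, on {review_day}")
# The intervals grow 2, 4, 8, 16, 32 days; a single failed review would reset them to 1.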

Some people have done this with paper cards and boxes, but now we have a much better tool that does all the scheduling for us: Anki. There is other Spaced Repetition Software (SRS) out there, but Anki is the best known, has a rich community, has many plugins and in general gets the job done well.

Applying SRS to Bible verses

As I said, I had used Anki in the past and I did learn German words with it. But I knew that that approach would not work well for memorizing Bible verses, simply because questions of the form "Galatians 2:2" -> "I went up because of a revelation and set before them (though privately before those who seemed influential) the gospel that I proclaim among the Gentiles, in order to make sure I was not running or had not run in vain." are too complicated. One small mistake and the whole card is gone.

The missing link was cloze questions. They enable you to take one longer text and break it up into smaller pieces that you will be tested on. Anki generates a card for each of the fragments you define and each card has one fragment missing. Our previous example would become:

Galatians 2:2 {{c1::I went up because of a revelation}} {{c2::and set before them}}
({{c3::though privately before those who seemed influential}}) {{c4::the gospel that I proclaim among the Gentiles}},
{{c5::in order to make sure}} {{c6::I was not running}} {{c7::or had not run in vain.}}

Anki would automatically turn this into the following cards:

Galatians 2:2 ... and set before them (though privately before those who seemed
influential) the gospel that I proclaim among the Gentiles, in order to make
sure I was not running or had not run in vain.

Galatians 2:2 I went up because of a revelation ... (though privately before
those who seemed influential) the gospel that I proclaim among the Gentiles,
in order to make sure I was not running or had not run in vain.

And so on for the other 5 fragments.
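
To make the mechanics concrete, here is a rough Python sketch of the expansion (this is not Anki’s actual code, just the idea: one card per cloze index, with that fragment hidden and all the others shown):

import re

CLOZE = re.compile(r"\{\{c(\d+)::(.*?)\}\}")

def expand_clozes(note: str) -> list[str]:
    # Generate one card per cloze index: hide that fragment, show the rest.
    indices = sorted({int(m.group(1)) for m in CLOZE.finditer(note)})
    cards = []
    for idx in indices:
        card = CLOZE.sub(
            lambda m: "..." if int(m.group(1)) == idx else m.group(2),
            note,
        )
        cards.append(card)
    return cards

note = "Galatians 2:2 {{c1::I went up because of a revelation}} {{c2::and set before them}}"
for card in expand_clozes(note):
    print(card)
# Galatians 2:2 ... and set before them
# Galatians 2:2 I went up because of a revelation ...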

These cloze fragments can’t be generated randomly and I found that choosing them well is critical for memorizing a verse. If you make the fragments too long or if you break them up on "odd" boundaries, they become much harder to memorize. For example, suppose that instead I had:

{{c1::I went up because of a revelation and set}} {{c2::before them (though privately}} 
{{c3:: before those who seemed influential}})

In this case, the second fragment "before them (though privately" would be quite hard to memorize correctly, because it actually belongs to two different ideas. Ideally, you have one idea per fragment. Of course, in some cases an idea is too long and has to be split into multiple fragments. And sometimes the boundaries are not as clear, but some logical consistency must be sought.

Results

I have had great success with this method. I started learning the letter to the Galatians at the beginning of November. As of today, I have successfully memorized the entire first chapter and the first eight verses of the second chapter, spending around 5.2 minutes per day. In total I spent 4 hours memorizing this. I have tested myself outside of Anki and I have an accuracy of about 90%. Most of my mistakes are things like mixing up “and”/“but” or sometimes the tenses of verbs.

I am astonished how well Anki works. I didn’t expect to be able to learn a text just like that. I like this newfound power of memorizing things so much that I actually started to apply it to other things. For example, last year I learned to solve the Rubik’s cube, but because I didn’t practice for a while, I kinda forgot, so I created an Anki deck with the steps. I also have a deck for miscellaneous life things, such as the zip code where I live, which I always forget but need for online orders.

I also made some decks for programming languages. I am learning Rust, so I add common things that I should know there. I have a deck for Python, where I add things which I should already know but keep searching for.

As Michael Nielsen said, using spaced repetition “means memory is no longer a haphazard event, to be left to chance. Rather, it guarantees I will remember something, with minimal effort. That is, Anki makes memory a choice." I love having this choice, to be able to memorize things, so I plan to continue using Anki for quite a while.

]]>
<![CDATA[The Rise of Skywalker]]>The grand finale for the Star Wars trilogy of trilogies is here. Or not so grand, because I went to see it in Oradea on the second day after it was released, and the cinema had at least a third of the seats empty.

I didn’t enjoy TLJ and

]]>
https://rolisz.ro/2019/12/25/the-rise-of-skywalker/5e0f65cafc44843fc8be449bWed, 25 Dec 2019 20:52:00 GMT

The grand finale for the Star Wars trilogy of trilogies is here. Or not so grand, because I went to see it in Oradea on the second day after it was released, and the cinema had at least a third of the seats empty.

I didn’t enjoy TLJ and I didn’t enjoy this one either, but for different reasons. The first reason is the iconic opening crawl and the next five minutes: in the crawl we are told Palpatine’s back (sorry, but this was known before the movie came out) and then Kylo Ren goes to meet him. Palpatine gives Kylo a mission in exchange for a fleet of ships and buh bye. That’s it. No explanation, no nothing. Except a remark that Palpatine created Snoke. By doing this, they cheapen the ending of Return of the Jedi. Vader’s sacrifice means a lot less, because he failed at getting rid of Palpatine.

The movie is very action packed. Almost all the time someone, usually Rey or Kylo (or both), is running around. In one sense, this is a good thing, because we see a lot of lightsaber fights between the two of them. On the other hand, Star Wars, with its grand ideas about light versus dark, about freedom versus tyranny and so on, would normally need to give viewers some breathing space to process and understand what is going on.

The visuals of the movie are absolutely gorgeous. They didn’t skimp at all on visual effects, either for rendering exotic planets or for showing all kinds of alien species.

The movie is also full of throwbacks to previous movies, such as Lando Calrissian showing up again. It’s amazing how they still managed to get Leia in the movie, even though Carrie Fisher passed away before filming started, so they had to use unused footage from the previous films. We get some explanations for things that left us puzzled in TLJ, such as the fact that Leia had Jedi training but didn’t finish it.

There are several new characters, including two really cute ones: a new robot, D-O, who seems to be the dumbest one so far but makes for a couple of funny jokes, and Babu Frik, the really cute robot repairer who messes around with C-3PO’s memory.

Spoilers ahead

Unfortunately, most of the film is not very memorable. I watched it 4 days ago and I already can’t describe most of the middle of the movie. It involves a chase for some sort of MacGuffin device, going from planet to planet following clues. Beautiful planets, some new characters, but the why and the what are forgettable.

The beginning is memorable, when Palpatine shows up. The end is memorable too, but mostly in a facepalm way, when Palpatine is killed by Rey. Kylo/Ben Solo is also thrown off a cliff by Palpatine, but he survives, climbs back up, saves Rey from death, they kiss and then he poofs away into the Force.

One of the other reasons I’m quite so upset is how easily characters switch sides. Kylo Ren has been the bad guy for two and a half movies, but now suddenly his mom whispers to him across the galaxy and he becomes a good guy? Or similarly, Rey is intent on destroying her grandfather, Palpatine, but then he reveals that, due to new, out-of-the-blue Force shenanigans, if she kills him with a lightsaber, he will "move" into her. She still plans to proceed and gets ready to strike him down, when she suddenly has a change of heart. Gimme a break.

Also, in TLJ, nobody came when the Resistance asked for help. Now the sky is filled with ships when they call for help. I get that Lando might have a lot of friends, but not that many.

All in all, I was disappointed by this movie. I know it’s a sci-fi movie, but I still expect better from it. I expect more logic and consistency in what can happen in the universe. I think it was a mistake letting Rian Johnson direct TLJ, because now J.J. Abrams had to correct the many mistakes Johnson made and didn’t have time to make the cool Star Wars stuff.

Grade: 5/10

]]>