- e4 d5
- exd5 Qxd5
- Nf3 Qd8
- Nc3 b6
- d4 Bb7
- Be3 e6
- Bb5+ c6
- Ba4 h6
- O-O Nf6

Already here things start to get weird: I am trying out a new, more aggressive style, since my latest blitz games have been a bit boring.

- Ne5 Be7
- f4 a5
- a3 b5
- Bb3 a4
- Ba2 Nbd7
- Qd3 Nd5
- Nxd5 cxd5
- f5 O-O

Here’s where the pressure starts to become more concrete: castling lets the attack begin, and all my pieces are about to point right at the black king.

- Rf3 Nxe5
- dxe5 Bc6
- Rg3 Kh8
- Qd2 Kh7
- Rf1 Rc8
- Bb1 exf5
- Rxf5 g6

And it was in this position that I sacrificed, you guessed it, THE ROOOK. PLAYING C3, THE BISHOP COMES ALIVE, AND HE CANNOT TAKE BECAUSE IT IS MATE IN 9!!!

But the sacrifice is accepted anyway, allowing the mate to appear on the board.

- c3 gxf5
- Bxf5+ Kh8
- Bxh6 Rg8
- Bg7+ Rxg7
- Qh6+ Kg8
- Qxg7# 1-0

The moral of the story: never take a rook sacrifice if your opponent has spent more than half a second on it.

Let T(n) be the number of such triplets for n. We can get a first recursive formula by noticing that if (a, b, c) is such a triplet for n - 3, then (a + 1, b + 1, c + 1) is one for n.

From this we deduce that

T(n) = T(n - 3) + floor((n - 1)/2),

since one can easily check that the number of triplets in which at least one of the numbers equals 1 is the integer part of (n - 1)/2.

Also, notice that these triplets correspond to the ordered triplets (a, b, c) with a + b + c = n, taken up to reordering.

Hence, we can apply Burnside’s Lemma to the set of ordered triplets with the group of permutations of 3 elements.

In total we have C(n - 1, 2) ordered triplets, which we can divide into three types: all entries different (1), exactly two equal (2), and, when possible, all three equal (3). An element of type 1 is fixed only by the identity, an element of type 2 is fixed by exactly 2 permutations (the identity and one transposition), and the element of type 3 is fixed by all 6 permutations.

If n is not divisible by 3, then using the bijection with the elements of type 1 (just multiply by 6, the number of possible permutations), we get the recursive formula:

If n is divisible by 3:

Substituting the recursive formula we found at the start, we get an explicit formula.

If n is divisible by 3:

otherwise
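Assuming the triplets in question are the positive integer solutions of a + b + c = n counted up to reordering (which is consistent with the floor((n - 1)/2) count above), the whole argument, including an explicit formula as the nearest integer to n²/12, can be brute-force checked; the function names here are my own:

```python
from math import comb

def ordered(n):
    # ordered triples of positive integers (a, b, c) with a + b + c = n
    return comb(n - 1, 2)

def unordered(n):
    # direct count of triples with 1 <= a <= b <= c and a + b + c = n
    return sum(1 for a in range(1, n + 1)
                 for b in range(a, n + 1) if n - a - b >= b)

def burnside(n):
    # Burnside over S3: the identity fixes every ordered triple, each of
    # the 3 transpositions fixes the triples with two equal entries
    # (2a + c = n), and the two 3-cycles fix only a = b = c = n / 3
    two_equal = (n - 1) // 2
    all_equal = 1 if n % 3 == 0 else 0
    return (ordered(n) + 3 * two_equal + 2 * all_equal) // 6

for n in range(3, 60):
    assert unordered(n) == burnside(n)
    # the explicit formula: the nearest integer to n^2 / 12
    assert unordered(n) == round(n * n / 12)
```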

What’s the probability that all the red balls will be extracted before the white balls and that all the white balls will be extracted before the black balls?

There are two ways to do it, one is more complicated, the other quite simple.

Let’s start with the complicated one. Let’s calculate all the sequences fulfilling the condition and divide by the number of total sequences.

Obviously, the last ball must be black. Then, let’s fix the 12 red balls. There are 13 spots where the 15 white balls can go, so the problem is equivalent to: in how many ways can we write 15 as a sum of 13 non-negative terms, with the restriction that the last one must be positive (there must be at least one white ball after the red ones)? Equivalently, we can write 14 as a sum of 13 non-negative terms. This is a very well-known problem (stars and bars), and the result is:

Now, apart from the last ball, 27 balls are fixed, hence there are 28 places where the remaining 16 black balls can stay. Hence, the problem is equivalent to writing 16 as a sum of 28 non-negative numbers, the same as before. The solution is:

Hence, the final result is:

Now, the simple solution is this: one black ball must occupy the last place and this happens with probability:

Also, considering the sequence of the red and white balls, a white ball must occupy the last place, and this happens with probability:

By multiplying these two results, we get the final probability:
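Both solutions can be checked numerically. A small sketch, assuming the counts above (12 red and 15 white balls, plus 17 black balls in total: the fixed last one and the 16 that get distributed):

```python
from fractions import Fraction
from math import comb

# the 'complicated' count: whites into 13 gaps with the last gap
# non-empty, then the 16 non-final black balls into 28 gaps
favourable = comb(26, 12) * comb(43, 27)
# total colour sequences of 12 red, 15 white and 17 black balls
total = comb(44, 17) * comb(27, 12)
complicated = Fraction(favourable, total)

# the 'simple' solution: last ball black, last of red+white is white
simple = Fraction(17, 44) * Fraction(15, 27)

assert complicated == simple
print(simple)  # 85/396
```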

This was one of them.

1. Nf3 c5

2. d4 cxd4

3. Nxd4 Nc6

4. Nc3 Nf6

5. b3 e6

6. Bb2 Bb4

7. a3 Ba5

8. e3 O-O

Already a very particular line: White has a fianchettoed dark-squared bishop against the Sicilian Defense. That won’t happen very often.

9. Be2 d5

10. O-O Re8

11. b4 Bc7

12. Ndb5 Bb8

My aim is to preserve the bishop pair in order to make it fully operational later.

13. Nd4 e5

14. Nxc6 bxc6

Now I have a very nice center, and I can start bringing my pieces closer to the kingside.

15. Na4 Ne4

16. Bf3 Ng5

17. Nc5 Nxf3+

18. Qxf3 Qg5

19. Rfd1 e4

Closing the center: the white queen must retreat behind the pawn chain, because my bishops conveniently control all the available kingside squares.

20. Qe2 Bh3

21. g3 h5

22. Re1 h4

23. Nb3 Re6

24. Nd4 Rg6

25. Rac1

In this position there is a mate in six; enjoy trying to find it before I show my continuation!

25. Rac1 hxg3

26. hxg3 Bxg3

27. fxg3 Qxg3+

28. Kh1 Bg2+

29. Kg1 Bf3+

30. Kf1 Qg1# 0-1

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as im
import random

random.seed(123)
l = []
a = list(range(1, 37))
random.shuffle(a)
for i in a:
    if i == 35:
        continue
    image = im.imread('images/collagephoto{}.jpg'.format(i))
    l.append(image)

l1 = [l[i] for i in range(5)]
l2 = [l[i] for i in range(5, 10)]
l3 = [l[i] for i in range(10, 15)]
l4 = [l[i] for i in range(15, 20)]
l5 = [l[i] for i in range(20, 25)]
l6 = [l[i] for i in range(25, 30)]
l7 = [l[i] for i in range(30, 35)]
final = np.vstack([np.hstack(l1), np.hstack(l2), np.hstack(l3),
                   np.hstack(l4), np.hstack(l5), np.hstack(l6), np.hstack(l7)])

fig = plt.figure(figsize=(10, 14), dpi=308)
plt.imshow(final)
plt.axis('off')
plt.savefig('collages/collage.jpg', bbox_inches='tight', pad_inches=0)
plt.show()

The contour can be anything, from a heart to a circle, you just need to plot with the right function and save the image. Notice that the contour image must have the same dimensions as the collage. In this implementation, I’m assuming the color of the plt.plot function is [111, 0, 255] (a kind of purple).
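For instance, a contour image could be produced like this; a minimal hypothetical sketch, assuming a circle and that same purple (the figsize and dpi are copied from the collage script so the dimensions roughly agree):

```python
import os

import matplotlib
matplotlib.use('Agg')  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

# a circular contour drawn in the purple [111, 0, 255]
os.makedirs('collages', exist_ok=True)
t = np.linspace(0, 2 * np.pi, 1000)
plt.figure(figsize=(10, 14), dpi=308)
plt.plot(np.cos(t), np.sin(t), color=(111 / 255, 0, 1))
plt.axis('off')
plt.savefig('collages/countour.jpg', bbox_inches='tight', pad_inches=0)
```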

This way I’m able to eliminate the noise from the contour by imposing a limit on the G value of the pixels (the channel that should be 0 in that purple).

Also, if you still have problems, there are the complete_x and complete_y functions, which can patch up the leftover glitches in your collage.

import matplotlib.pyplot as plt
import matplotlib.image as im
import numpy as np

collage = im.imread('collages/collage.jpg')
countour = im.imread('collages/countour.jpg')
mask = np.any(countour != [255, 255, 255], axis=-1)
collage1 = collage.copy()
nice_points = []
for y in range(collage1.shape[0]):
    for x in range(collage1.shape[1]):
        if mask[y, x]:
            if countour[y, x][1] < 10:
                collage1[y, x] = countour[y, x]
            else:
                mask[y, x] = False

cancel = 1
counter = 0
counter_check = 0
collage2 = np.zeros_like(collage1)

def fill(collage1, n=1):
    global mask
    if n == 0:
        return collage1
    else:
        temp_collage = np.zeros_like(collage1)
        for y in range(1, collage1.shape[0]-1):
            for x in range(1, collage1.shape[1]-1):
                a = mask[y, x-1]
                b = mask[y-1, x]
                c = mask[y, x+1]
                d = mask[y+1, x]
                if (not mask[y, x]) and a and b and c and d:
                    temp_collage[y, x] = collage1[y-1, x]/4 + collage1[y+1, x]/4 + collage1[y, x-1]/4 + collage1[y, x+1]/4
                    mask[y, x] = True
                else:
                    temp_collage[y, x] = collage1[y, x]
        return fill(temp_collage, n-1)

def check_position(y, x, collage):
    max_x = collage.shape[1]
    global counter_check
    if (not mask[y, x]) and mask[y, x-1]:
        for a in range(100):
            if mask[y, x + 1 + min(a, max_x-x-2)]:
                counter_check = counter_check + 1
                return True
    return False

def check_purple(y, x, collage):
    max_x = collage.shape[1]-1
    c = 0
    dp = 0
    if mask[y, x-1]:
        for r in range(x, max_x):
            if mask[y, r]:
                if dp > 10:
                    c = c + 1
                dp = 0
            else:
                dp = dp + 1
        return c
    return 17

temp_collage = fill(collage1)
for y in range(temp_collage.shape[0]):
    cancel = 1
    for x in range(temp_collage.shape[1]):
        if mask[y, x]:
            counter = counter + 1
            counter_check = 0
            if counter == 1:
                cancel = cancel * (-1)
            collage2[y, x] = temp_collage[y, x]
        else:
            condition = (check_position(y, x, temp_collage) or counter_check > 0)
            if not condition:
                counter = 0
            # print(cancel)
            if not 189 < y < 196:
                if check_purple(y, x, temp_collage) == 1 and not condition:
                    cancel = -1
                if check_purple(y, x, temp_collage) == 0:
                    cancel = 1
            if cancel == 1:
                collage2[y, x] = [255, 255, 255]
            else:
                collage2[y, x] = temp_collage[y, x]

def complete_x(x_min, x_max, y, direction=1, delete=False):
    for x in range(x_min, x_max):
        c = 0
        while not mask[y + direction*c, x]:
            if delete:
                collage2[y + direction*c, x] = [255, 255, 255]
            else:
                collage2[y + direction*c, x] = temp_collage[y + direction*c, x]
            c = c + 1

def complete_y(y_min, y_max, x, direction=1, delete=False):
    for y in range(y_min, y_max):
        c = 0
        while not mask[y, x + direction*c]:
            if delete:
                collage2[y, x + direction*c] = [255, 255, 255]
            else:
                collage2[y, x + direction*c] = temp_collage[y, x + direction*c]
            c = c + 1

fig = plt.figure(figsize=(10, 10), dpi=300)
plt.imshow(collage2)
plt.axis('off')
plt.savefig('finalcollage.jpg', bbox_inches='tight', pad_inches=0)
plt.show()

P.S. Happy Easter!

It was released in 1986, and it’s a very particular kind of shred, where Chick and his band ‘show off’ their technique.

Its live versions are very nice: these exhibitions are usually very long, and every musician gets his own part where he can go all out with an astounding solo.

How do you improvise on such a piece? Pentatonic scales are a key factor here: they easily make the most interesting tensions of the chords stand out. Other important scales, to use on the dominant chords, are the semitone-tone (half-whole diminished) and the super-Locrian (altered) scale.
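Since a scale is just a pattern of semitone steps, here is a small sketch that spells out both scales from any root; the interval lists and note names are my own labelling, not from the original post:

```python
# intervals in semitones; the last step of each list returns to the root
SEMITONE_TONE = [1, 2, 1, 2, 1, 2, 1, 2]   # half-whole diminished
SUPER_LOCRIAN = [1, 2, 1, 2, 2, 2, 2]      # altered scale
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def build_scale(root, steps):
    # walk the chromatic circle starting from the root
    idx = NOTES.index(root)
    scale = [root]
    for step in steps[:-1]:
        idx = (idx + step) % 12
        scale.append(NOTES[idx])
    return scale

print(build_scale('G', SEMITONE_TONE))   # e.g. to play over a G7 chord
print(build_scale('G', SUPER_LOCRIAN))
```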

In the end, this piece is ideal if your intention is to improve your proficiency in fast playing.

There are m distinct positive even integers and n distinct positive odd integers that add up to 1987. Find the maximum of 3m + 4n.

So we have this sum of integers

We can write that

Now we can exploit the Cauchy-Schwarz inequality to make our desired coefficients appear.

Hence,

Hence the maximum is 221, which can be achieved, for example, with m = 27 and n = 35.
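A quick brute-force check of this maximum (a sketch, not the original derivation): m distinct positive even integers sum to at least m(m + 1), n distinct positive odd integers sum to at least n², the parity of 1987 forces n to be odd, and any even surplus can be absorbed by enlarging the largest even term, so feasibility reduces to m(m + 1) + n² ≤ 1987:

```python
best = 0
for m in range(0, 45):
    for n in range(1, 45, 2):              # n must be odd: 1987 is odd
        if m * (m + 1) + n * n <= 1987:    # minimal achievable total
            best = max(best, 3 * m + 4 * n)
print(best)  # 221
```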

Today’s subject is bashing sums without having to do any calculation.

Let’s say we have this sum:

One way to do it would be to differentiate (1 + x)^100, multiply the result by x, and differentiate again. This gives an expression in the variable x, and by substituting x = 1 we get the desired result.

But we are artists, not butchers. So, think about this. The problem is equivalent to choosing a subset of people among 100, and then choosing a president and a vice president among them (president and vice president can be the same person).

Hence, we count it another way. Let’s first choose the president and vice president. Remember that 2^n is the number of subsets of a set with n elements.

If they are the same person, we get 100 · 2^99 total cases; if they are not, we have 100 · 99 · 2^98 cases.

Summing them, we get the result.
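The double counting can be verified numerically in a couple of lines (math.comb, with the 100 people of the problem):

```python
from math import comb

n = 100
# left side: choose a subset of size k, then a president (k ways)
# and a vice president (k ways, possibly the same person)
lhs = sum(k * k * comb(n, k) for k in range(n + 1))
# right side: president == vice president gives n * 2**(n - 1) cases,
# president != vice president gives n * (n - 1) * 2**(n - 2) cases
rhs = n * 2 ** (n - 1) + n * (n - 1) * 2 ** (n - 2)
assert lhs == rhs
```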

I played black, and as I usually do when e4 is played, I responded with c5, going for the Sicilian Defense.

1. e4 c5

2. Nf3 e6

3. d4 cxd4

4. Nxd4 Nc6

5. Nxc6 bxc6

6. Be2 Bb7

7. Nc3 Be7

8. Be3 d5

9. exd5 cxd5

10. Bd4 Nf6

11. g4 h6

At this point, I could have gone for the immediate e5, and after Bxe5, d4, Bf3, Bxf3, Qxf3, dxc3 I would have won a piece.

12. h4 Rg8

Again, there was the option of e5.

13. Bb5+ Nd7

14. Be5 Kf8

15. Bxd7 Qxd7

16. Qf3 f6

Here, there was the option of d4, and after Ne4, Qd5 I would have won a piece.

17. O-O-O Ke8

Again I could have played d4.

18. Bg3 d4

Finally.

19. Qd3 Bxh1

20. Nb5 Kf7

Here Kf7 was really unnecessary, I could have calmly played Bd5 to save the piece.

21. g5 Bc6

22. Nc7 Rac8

Here I started playing very precisely, and, in fact, it didn’t take much time to convert the position into a win.

23. gxf6 Bxf6

24. Na6 Bb5

25. Qa3 d3

26. Nb4 Rgd8

27. Kb1

Here’s the position where a mate in two is possible. It’s not difficult to spot, but here it is.

27. Kb1 dxc2+

28. Kc1 Qxd1#

0-1

import tensorflow as tf
from tensorflow import keras
import numpy as np

filepath = 'dante.txt'
with open(filepath) as f:
    dantext = f.read()
print(dantext)

tokenizer = keras.preprocessing.text.Tokenizer(char_level=True)
tokenizer.fit_on_texts(dantext)
[encoded] = np.array(tokenizer.texts_to_sequences([dantext])) - 1
dataset_size = tokenizer.document_count
max_id = len(tokenizer.word_index)

train_size = dataset_size
dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size])
n_steps = 100
window_length = n_steps + 1
dataset = dataset.window(window_length, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_length))
batch_size = 32
dataset = dataset.shuffle(10000).batch(batch_size)
dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:]))
dataset = dataset.map(lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))
dataset = dataset.prefetch(1)
print(dataset)

model = keras.models.Sequential([
    keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id],
                     dropout=0.2, recurrent_dropout=0.2),
    keras.layers.GRU(128, return_sequences=True, dropout=0.2, recurrent_dropout=0.2),
    keras.layers.TimeDistributed(keras.layers.Dense(max_id, activation='softmax'))
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
checkpoint_cb = keras.callbacks.ModelCheckpoint('nldante.h5')
history = model.fit(dataset, epochs=20, callbacks=[checkpoint_cb])
model.save('nldante.h5')

This program divides the dataset into windows: from 100 characters, the model learns to predict the 101st. Varying this value changes which patterns the network can pick up, and you can try it yourself to see how the outputs change. We also shuffle and batch the windows to make training faster.
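As a plain-Python sketch of what the windowing pipeline produces, using a toy sequence of 10 "characters" and a window length of 5 instead of 101:

```python
# toy stand-in for the encoded Dante text
encoded = list(range(10))
window_length = 5

# dataset.window(window_length, shift=1, drop_remainder=True) yields
# every run of 5 consecutive characters
windows = [encoded[i:i + window_length]
           for i in range(len(encoded) - window_length + 1)]

# each window is then split into an input (all but the last character)
# and a target shifted one step ahead
pairs = [(w[:-1], w[1:]) for w in windows]

print(windows[0])  # [0, 1, 2, 3, 4]
print(pairs[0])    # ([0, 1, 2, 3], [1, 2, 3, 4])
```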

GRU layers are well suited to processing sequences, among the best you can employ for this purpose. You could also try LSTM layers instead, but I think that, for this task, GRUs work better.

Finally, the TimeDistributed layer applies the Dense output layer to different timesteps independently.

Let’s see how to make predictions now.

import tensorflow as tf
from tensorflow import keras
import numpy as np

model = keras.models.load_model('nldante.h5')
filepath = 'dante.txt'
n_chars = 200
with open(filepath) as f:
    dantext = f.read()

tokenizer = keras.preprocessing.text.Tokenizer(char_level=True)
tokenizer.fit_on_texts(dantext)
max_id = len(tokenizer.word_index)

def preprocess(texts):
    X = np.array(tokenizer.texts_to_sequences(texts)) - 1
    return tf.one_hot(X, max_id)

def next_char(text, temperature=1):
    X_new = preprocess([text])
    y_proba = model.predict(X_new)[0, -1:, :]
    rescaled_logits = tf.math.log(y_proba) / temperature
    char_id = tf.random.categorical(rescaled_logits, num_samples=1) + 1
    return tokenizer.sequences_to_texts(char_id.numpy())[0]

def complete_text(text, n_chars=50, temperature=1):
    for _ in range(n_chars):
        text += next_char(text, temperature)
    return text

print(complete_text(' ', n_chars=n_chars, temperature=0.2))

So, the next character is chosen by sampling from the probability distribution the model outputs. By reiterating this process, one can generate sentences. The temperature parameter fixes how much the probabilities matter in the choice of the next character: a value close to 0 favours the high-probability characters, while a high temperature gives more space to the other characters as well.
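To see what the temperature does, here is a standalone sketch of the same rescaling trick used in next_char, on a made-up three-character distribution:

```python
import numpy as np

def rescale(probs, temperature):
    # divide the log-probabilities by the temperature, then re-normalize
    logits = np.log(probs) / temperature
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

probs = np.array([0.6, 0.3, 0.1])
cold = rescale(probs, 0.2)   # sharpens: the top character dominates
hot = rescale(probs, 3.0)    # flattens: rare characters gain probability
assert cold[0] > probs[0] > hot[0]
```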

Here’s a sample output:

con la figlia da l’altro aspetto

di questo viso che s’accorsa di sua favilla.

non puoi che tu veder le parole,

per che si convenne la mente templante

che s’interna di quel che s’interna vista,

per ch

This really seems Dantesque, don’t you think?
