Social Network – Saywire

Current version of Saywire


  • Among the first social networks for minors
  • Live in 2007


Saywire is one of the first social networks for minors complying in large part with COPPA. In fact, it was started even before the term social network came into common use. My company was hired to provide remote web application development services, both front end and back end. We set up and maintained the code repository and automated deployment into staging and production environments, including the ability to roll back if needed. We wrote the PHP and corresponding MySQL, HTML, and JavaScript for large portions of early versions of the web site and advised on how best to use the Fusebox web application framework.
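The deploy-and-rollback setup can be sketched as a symlink swap. This is a minimal illustration with hypothetical paths, not the actual scripts we used:

```shell
# Minimal sketch of symlink-swap deployment with rollback.
# Paths are hypothetical; a real deploy would first export a tagged
# revision from Subversion into a fresh release directory.
set -e
BASE=$(mktemp -d)                 # stands in for the web root
mkdir -p "$BASE/releases/r1" "$BASE/releases/r2"
echo "version 1" > "$BASE/releases/r1/index.php"
echo "version 2" > "$BASE/releases/r2/index.php"

ln -sfn "$BASE/releases/r1" "$BASE/current"   # initial deploy
ln -sfn "$BASE/releases/r2" "$BASE/current"   # deploy the new release
ln -sfn "$BASE/releases/r1" "$BASE/current"   # roll back in one step
live=$(cat "$BASE/current/index.php")
echo "$live"
```

Because each release stays on disk, rolling back is just repointing the symlink, which is atomic from the web server's point of view.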


Early versions of Saywire were based on Fusebox in a LAMP environment. We used Subversion as our code repository and incorporated a variety of client-side technologies (CSS2 and JavaScript).


Using Keys to Group Related Elements in XSLT

Although I’ve been using XSLT on a variety of projects for nearly 10 years now, I’m still stuck using XSLT 1.0-only processors and frequently turn to “keys” to solve complex grouping problems. A few weeks ago I was presented with a grouping problem that put my knowledge to the test and forced me to fully understand how keys work and how best to set them up. Thanks to the fabulous people on the XSL mailing list, I was given lots of valuable feedback and pointed in the right direction. I feel it’s only fair to share what I learned: the USE attribute is the XPath to the node whose data you want to use as the key (the “grouper”, if you will). Furthermore, that XPath is relative to the element in the MATCH attribute.

Normally it’s as easy as specifying an attribute to use as the key, but let’s consider an example in which all the elements are the same and the only thing that uniquely identifies them is their location relative to each other. Consider a table with the following structure (TD elements with nothing below them have ROWSPANs set):

td td td td td td
td td td td td td
      td td td td
      td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
      td td td td
td td td td td td
      td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
      td td td td
      td td td td
   td td td td td
td td td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
      td td td td
      td td td td
      td td td td

In a KEY element, you identify the source element you wish to capture (TR in this case) in the MATCH attribute. In the USE attribute you specify an XPath expression indicating the node the matched element will belong to (be grouped by). This is fairly straightforward when the input is well defined, but in a situation like ours, where structure is the only thing we can key off of, and the structure itself is somewhat amorphous, writing the proper XPath can be quite difficult.

Let’s say the first row is the header row. It should not be included in the output as it simply contains labels for the columns. Rather than accounting for it in the key element, we’ll simply skip it when applying templates (select="//tr[position() > 1]").
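In the stylesheet, that call might look like this (a sketch; the enclosing root template is assumed):

```xml
<xsl:template match="/">
	<xsl:apply-templates select="//tr[position() &gt; 1]"/>
</xsl:template>
```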

The key (or group name) for the matched elements will come from rows containing 6 or more TD elements; specifically, the data will come from the first TD element in those rows. The XPath would be something like this: ancestor-or-self::tr[count(td) >= 6]/td[1]. Unfortunately, this only groups rows that themselves contain 6 or more TD elements; rows with 5 or fewer TD elements are left out of the result set. For rows with 5 or fewer TD elements we will need to look up in document order and stop at the first row above them containing 6 or more TD elements. This is where it gets complicated… It could be solved with some sort of IF-THEN-ELSE construct, but since we’re using XSLT, that’s not the best approach.

Instead, we’re going to capture ALL the potential keys above the current row and filter out the ones we don’t need.

To “look up” we use preceding-sibling: ancestor-or-self::tr/preceding-sibling::tr[count(td) >= 6]. This gives us ALL the rows with 6 or more TD elements preceding the currently matched row. However, we only want the row [with 6 or more TD elements] that immediately precedes the currently matched row, not all of the preceding siblings. Thus we append [position() = count(.)]. Because preceding-sibling is a reverse axis, position() counts outward from the current row, so position 1 is the immediately preceding qualifying row; and since count(.) is always 1, [position() = count(.)] is equivalent to [position() = 1]. (This is also why last() doesn’t work here: on a reverse axis, last() selects the qualifying row farthest away.) Finally we take the first TD element in that row: /td[1].

Finally, we filter out the nodes we don’t need. We do this by joining the two expressions with the pipe character |, enclosing the whole thing in parentheses, and taking the very last element of the resulting set: [last()], which is exactly the key we are looking for. Here is the final key element:


	<xsl:key name="row-group" match="tr"
		use="(ancestor-or-self::tr[count(td) &gt;= 6]/td[1] 
			| ancestor-or-self::tr/preceding-sibling::tr[count(td) &gt;= 6][position() = count(.)]/td[1])[last()]"/> 
	<!-- the name attribute is arbitrary -->
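Assuming the key is declared with a name, say name="row-group" (the name is arbitrary), pulling every row belonging to one group is then a single function call; the group label here is purely hypothetical:

```xml
<!-- All TR elements whose computed key equals "Some Label" -->
<xsl:apply-templates select="key('row-group', 'Some Label')"/>
```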


Because it’s hard to see the result of such a complex XPath, I first run the transformation using a template match on TR elements and copy-of the results to the output. That way I can see what my XPath is actually producing. Once I’ve got the set I’m looking for, I move it into a key element.
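That debugging template looks roughly like this (the group-key wrapper element is just an arbitrary name to make the output readable):

```xml
<xsl:template match="tr">
	<group-key>
		<xsl:copy-of select="(ancestor-or-self::tr[count(td) &gt;= 6]/td[1] 
			| ancestor-or-self::tr/preceding-sibling::tr[count(td) &gt;= 6][position() = count(.)]/td[1])[last()]"/>
	</group-key>
</xsl:template>
```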

They say you don’t really know something until you can explain it to someone else. I’m not sure if I’ve succeeded in explaining it or not, but I feel like I’m much, much closer than I was when I was presented with this problem a few weeks ago.

My XML input.

My XML output.


Store – Carolan


  • More than 10,000 items online
  • Highly optimized for search engines
    • SEO Friendly URLs
    • Main content delivered first in document order, navigation last
    • Extensive use of CSS sprites
    • All HTML, JavaScript, and CSS files minified and gzipped
  • Fully automated integration with backend database, including roll-back capabilities
  • Fully automated connection between products and their images
  • Fully automated watermarking of images
  • Custom store administration interface for new items

Description

The site is a “bricks to clicks” solution for Carolan, a party supply store in the Canary Islands. With more than 40,000 items in their inventory, Carolan needed an online store to expand their presence to other parts of Spain. Furthermore, they wanted to improve their ranking in the search engines and sell, sell, sell!

Technical

The site was a Zen Cart store with a custom theme and a few minor tweaks to improve search engine optimization. On the back end, the server received updates to articles in XML and applied the changes to the Zen Cart database every few minutes. Likewise, whenever an item was sold, an XML file was sent to the backend database to keep inventory accurate. The site made extensive use of Bash scripts to move the data from XML to MySQL and to manage the creation of watermarked images (a minimum of 4 sizes per image). During the busy season, the site received more than 10,000 visits per day. You can read more about this project here (in Spanish).
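The watermarking step can be sketched as a loop over target sizes. This dry run just prints the ImageMagick commands a real script would execute; the file names and sizes are illustrative, not the production values:

```shell
# Dry-run sketch: emit one ImageMagick command per target size.
# Sizes and file names are illustrative stand-ins.
cmds=""
for size in 100 250 500 800; do
  cmd="convert product.jpg -resize ${size}x${size} watermark.png -gravity southeast -composite product_${size}.jpg"
  cmds="$cmds$cmd
"
  echo "$cmd"
done
```

In production, the echo would be dropped and the convert command executed directly, one run per size per product image.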

What is an Expert Computer User?

The computer is, primarily, an advanced communications tool as evidenced by the fact that nearly all human/computer interaction ends with the creation of a document to be consumed by others. The expert computer user is technically eloquent and efficient, capable of communicating subtleties in a variety of media including text (email, chat, online forums, wiki, blogs, newsletters, “PowerPoint” presentations, reports), graphics (graphs, charts, diagrams, digital images and video, animation), audio (podcasts, soundtracks), and a combination of these (interactive multimedia). Even applications such as airline reservations systems are ultimately just messages sent from the customer to the airline stating all the facts necessary to reserve a seat on a plane.

The expert computer user is defined by her level of mastery of computer-mediated communication, not just by her knowledge of software or hardware. She is liberated by the computer as a communications tool, not burdened by it. She looks for, and recognizes, consistent user interfaces in all the systems she encounters. She takes risks when using automated systems to increase her efficiency and isn’t disheartened when something doesn’t respond as expected; in fact, she always has a backup plan. She has a clear idea of what she wants to say and how she wants to say it. She has a clear understanding of who her audience is, including their level of computer expertise. She understands, and can elaborate on, the strengths and weaknesses of communicating a message in one medium over another. We recognize this user by her adeptness and lack of fear when interacting with computers.

The expert computer user types with all 10 fingers, without looking at her hands, at more than 50 words per minute. She uses keyboard shortcuts rather than reaching for the mouse. She is autonomous. She searches the internet for answers and weighs the authority with which those answers are written before taking action. She synthesizes and shares her findings with her peers via email, chat, blogs, wikis, or whatever medium she feels will best communicate her intended message. She participates actively in a variety of online networks, sharing and garnering knowledge on a variety of subjects, not just computers.


Whenever I’m asked for help, I’m always trying to raise the level of computer literacy to something close to the above description. Unfortunately, most people aren’t interested in climbing quite so high and alas, my hopes are almost always dashed on the rocks of unrealistic expectations. Have you any interest in raising your level of computer literacy?

SEO Test for 10th Graders

In my 10th grade IT class, my students are learning the difference between semantic markup and non-semantic markup. As a test, we created a fake product, pezrasine, and asked each student to create a web site consisting of a few pages advertising it. The goal is to see which of their sites appears first in Google when searching for this invented word/name.

As one would expect, they had a lot of trouble producing technically correct pages using Amaya as their only editor, but for this test that shouldn’t matter. In fact, it will be interesting to see whether Google penalizes technical difficulties…

So, if you have a free minute and want to learn about an amazing new hair gel called pezrasine, follow the link and click any of the two-letter links that follow.

TFTP ARP Timeout LTSP Ubuntu

Just a quick note for those experiencing the same issue. After a fresh install of an LTSP server from the Ubuntu 10.10 (Maverick Meerkat) alternate CD I was unable to connect from any of the thin clients. I kept getting a TFTP timeout (but DHCP was clearly working).

After checking all the variables mentioned in this article, I discovered that the filename for pxelinux.0 in /etc/ltsp/dhcpd.conf ended in .tmp as in: filename "/ltsp/i386/pxelinux.0.tmp";. I don’t know if this is a bug in the installation program or what, but removing “.tmp” worked like a charm and everything is now up and running, and I’m thrilled!
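For reference, the offending line and its fix in /etc/ltsp/dhcpd.conf (everything else in the file stays as the installer wrote it):

```
# Broken: clients request a file that doesn't exist, so TFTP times out
filename "/ltsp/i386/pxelinux.0.tmp";

# Fixed
filename "/ltsp/i386/pxelinux.0";
```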

Working Remotely

After 7 years of working remotely as a Web Developer from the island of Gran Canaria (and nearly 20 years in one IT-related position or another), I started teaching IT to high school students here in the Canary Islands. Working with teens has been an eye-opener, to say the least…

More than 50% of my students had never used email and had never heard of Netiquette at the start of the school year. Although the curriculum from prior years included the creation of PowerPoint presentations, writing blogs, and modifying HTML, not one student knew how to set a margin or a tab in a word processing application. I was aghast! How could such gaps in basic IT knowledge be tolerated? Where was the curriculum designer? Who gave all these kids email addresses without making them take (and pass) a test on Netiquette first?

To their credit, what they did learn (creating videos, for example) they learned pretty well. Nevertheless, in the business world (and for the foreseeable future) formal business communication (contracts, proposals) takes place in writing, not video, and via email, not via Tuenti. Furthermore, these students, more so than those who came before, absolutely MUST master computer-mediated communication if they ever hope to succeed in their careers.

For these reasons I decided to conduct a series of interviews with some of my former (and present) clients, co-workers, and related software developers. In these interviews we discuss a variety of aspects of working remotely. Most of the people I spoke with agreed on one thing in particular: being able to express yourself clearly, in writing, is the deciding factor in whether or not someone will work with you. One of the interviewees put it this way: “I am going to quickly look for ways to eliminate 95% of [the resumes that cross my desk].” Expressing yourself poorly in writing makes you a likely target for elimination, and this series of interviews is intended to drive that point home.

Now that I’ve edited down the videos and watched them all myself, I’m surprised how consistently the following themes came up:

  • There must be trust between both parties, but it’s not that hard to achieve.
  • Expressing yourself clearly and effectively in writing is crucial to your success.
  • Most problems that arise are the result of a lack of trust.

The café where I recorded most (but not all) of these interviews was my favorite corner café here in Las Palmas: Coffee Break.

The interviews that follow have been edited down to fit within the 15 minute maximum allowed by, but there was a lot of great stuff left on the cutting room floor… Click the names of each person to watch the video and enjoy!

Developing a Web Store

Since the summer of 2009 I have been developing the web store for Carolan, a costume and party supply store based in Las Palmas de Gran Canaria. I’ve lost track of the total hours I’ve invested in this project because it has become an obsession with following best practices.

Conventional wisdom says that when you want to build a web store, you grab the open source system of the moment and set it up, end of story. The alternative (developing a custom store from scratch) surely would have cost me twice as much, right? That’s what this article is about…

Carolan sells party supplies and single-use products. They carry a wide variety of items such as costumes, cups, napkins, garlands, decorations, and much more; in fact, they have more than 20,000 items catalogued. When I began working with the Carolan team, one of their main wishes was to create a “live” connection between their database and the web store to minimize costs and increase efficiency. They also wanted the site to be easy to use (with items easy to find) and dynamic, with a home page that changes with the season, and they wanted the store to rank well in search engine results.

Analyzing their wants and needs, I determined that the best option was to adopt an open source store. I believed it would save me a ton of time, but now I’m not so sure, and I want to know what you think. Here I summarize the modifications that have had to be made since I began installing the store. Keep in mind that I’m not detailing ALL of the modifications, just the biggest / most important ones.


SEO-Friendly URLs

By default, Zen Cart comes with an option that lets you make URLs “search engine safe” (readable by search engines). Basically, it converts the URL into a more readable form (or something along those lines). Not bad, but these days that kind of transformation isn’t necessary.

What I did want was to convert the URL into something like /disfraces-y-complementos/disfraz/adulto/mujer/disfraz-abeja-adulto.html so as to get keywords into the URL. I achieved it with a couple of modifications to the Zen Cart code base and some transformations via mod_rewrite. It took me a while to perfect the method, but for now it seems to be working reasonably well.
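A sketch of the mod_rewrite half of that setup; the regex and the product_slug parameter are hypothetical stand-ins (the real modification mapped the trailing slug back to a Zen Cart products_id):

```apache
RewriteEngine On
# Leave real files and directories alone
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# /disfraces-y-complementos/.../disfraz-abeja-adulto.html
#   -> index.php product lookup by slug
RewriteRule ^(?:[a-z0-9-]+/)*([a-z0-9-]+)\.html$ index.php?main_page=product_info&product_slug=$1 [L,QSA]
```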

The Title Tag

Zen Cart’s default Title tag value doesn’t lend itself to good indexing by search engines. I had to modify it in quite a few places so that it would always be unique (never repeated), since repeated values hurt usability.

Speed and Source Code Optimization

More and more, a site’s speed determines its ranking within search results. I have tried to follow all of Yahoo!’s and Google’s suggestions for improving site speed. By my measurements, the site initially loaded in 7 seconds (uncached, including network latency). After these changes, requests finish in under 1.5 seconds. To implement these suggestions I had to touch:

  • the overall design: sprites were implemented, and stylesheets and .js files were combined
  • the product detail page design (I minimized the amount of HTML generated by the store)
  • the search design (the links to subsequent result pages were improved, the presentation of results was modified, the Title tag of result pages was changed to include product names and not just the page number, and a product index was created)
  • the “checkout” design and process were modified
  • “encoded” characters were eliminated (&aacute; became á)
  • the number of requests per page was reduced from ~40 to fewer than 20
  • cookieless domains and a Content Distribution Network (CDN) were put in place
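Several of the items above (gzipping, long cache lifetimes for sprites) can be handled in Apache configuration alone; a minimal sketch, assuming mod_deflate and mod_expires are enabled and the lifetimes shown are illustrative:

```apache
# Compress text assets on the fly
AddOutputFilterByType DEFLATE text/html text/css application/javascript

# Far-future expiry for static assets such as CSS sprites
ExpiresActive On
ExpiresByType image/png "access plus 1 month"
ExpiresByType text/css  "access plus 1 week"
```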

In the end, after all these changes, and considering that they only use a single payment method, I wonder whether I really saved anything by using Zen Cart (which itself required an investment of time to learn). What do you think?