I am Nathan Bijnens, a developer with a passion for great code, the web and Big Data. I am interested in programming and system administration, especially where they meet, from scaling platforms to designing the architecture of new and existing products and everything in between.
I am focused on data analysis and building Big Data applications using Hadoop, in combination with Pig, Hive and Cascading. I follow the rise of real-time big data closely, actively developing applications on top of Storm and designing Lambda-like architectures. The infrastructure side interests me as well, and I am learning more about Business Intelligence and visualizing big data. I advise on Big Data strategies and evangelize Big Data to clients and at conferences.
I have a lot of experience with PHP, Java and related technologies such as MySQL, NoSQL stores, Memcached, Nginx and much more. I strongly believe in unit tests and design patterns as the way to write precise, easily maintainable code that works.
I am a passionate Linux system engineer and a follower of the DevOps movement, using Puppet to automate deployments and Ganglia to monitor them.
I am inquisitive: I love learning new things and improving what I already know. I am very passionate about what I do, and I have strong analytical skills.
Defining and implementing the architecture for Octopin, a Pinterest social media analytics startup. I designed and implemented a Lambda Architecture on top of Storm and Hadoop, using Redis, Voldemort and Cascading, as well as Thrift.
Defining and implementing the architecture for hshmrk, a data visualization startup. The application backend is written as a Jersey (Java) REST service, using ElasticSearch as storage. The frontend is an AngularJS and D3 web application. This approach allows us to scale easily.
Developing Oracle database views for integration of Greencat and Crystal Reports.
I co-develop and am the current lead on the IHarvest project, a distributed HTTP fetcher & parser on top of Storm; the results are stored on HDFS for more extensive querying using Hadoop.
I co-develop our internal Semantic Analysis Engine on top of Storm and ElasticSearch. The frontend is being written in Jersey & AngularJS.
Responsible for contact with Microsoft.
Creating a Drupal-based website, hosted on Windows Azure. I touched all aspects of creating this website, from design and implementation to copywriting.
Designing the DataCrunchers business cards and a company flag.
I developed our credit management web application in PHP, managed a small group of developers, took the lead on everything technical and coordinated with the directors, partners and clients.
I virtualized and automated the whole iController server setup using Puppet. I created and extended several open-source Puppet modules.
Creating a small web application to organize and input subscriptions into Octopus.
Setting up the Hadoop infrastructure, analyzing data with Pig and creating a dashboard using Symfony2, HBase & Thrift.
Creating a new PHP framework (no existing frameworks were allowed) as the base for the new dating site Twoo. I advised on design patterns and best practices, and developed the security and ACL platform as well.
I introduced an open-source event framework and presented it to my co-developers. I introduced new application logging functionality for logging to Hadoop. I evangelized unit tests, including an initial implementation and presentations. I evaluated Redis, Memcached & Membase (now Couchbase).
I follow the DevOps movement, using Puppet to automate and Ganglia to monitor critical infrastructure. I have open-sourced and contributed to several Puppet modules.
I follow cloud-related techniques and technologies with great interest, in all their forms: IaaS, PaaS, MaaS, … I have used, in testing or in production, Amazon S3, Amazon EC2, Amazon Elastic MapReduce, Google BigQuery (private beta tester), the Windows Azure Platform, Hadoop on Azure (private beta tester) and OpenStack.