How to Scrape Web Pages using ScraperWiki? | Python, ScraperWiki, HasGeek, Jobs

By scraping, I mean getting data from web pages programmatically. For example, check out Anand’s post about how he scraped Flipkart while shopping for a laptop. That’s how I came to know about ScraperWiki, and recently I wrote some quick and dirty scrapers to help a friend. I thought of sharing that knowledge, which might help you write simple scrapers.

Scraperwiki logo

Yes, ScraperWiki makes it easy. Along with handy methods, it also gives you infrastructure for your data processing. Even on a last-generation laptop, you can type in the code and let ScraperWiki do the job of processing large amounts of data while you sit back and relax. ScraperWiki gives you three languages to choose from:

  • PHP 5.3.5
  • Python 2.7.2
  • Ruby 1.9.2

All the software you need is a browser; ScraperWiki takes care of the rest unless you have special needs. The best places to start are the documentation and the live tutorials. They will give you everything you need to get started.

For starters, let’s scrape jobs from the HasGeek job board. Know your page before you scrape it! The HTML structure of the page we are going to scrape looks like this:

Every job is posted as a stickie (the yellow post-it-note-style boxes you see on the page), which can be a single or a group posting. Now check the HTML source more closely to see what these are really made of. What makes a stickie? It’s an li tag. Likewise, get to know all the elements involved in your scraping goal. A single stickie is a bunch of span elements inside an a, all wrapped in an li tag.

<li class="stickie">
	<a href="/view/f5ai9" rel="bookmark">
		  <span class="location">Mumbai</span>
		  <span class="date">Sep 27</span>
		  <span class="headline">Android+Java hacker at Mobile Payments startup by IIT-IIM founders</span>
		  <span class="company">PayMe</span><span class="new">New!</span>
	</a>
</li>
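To make the span-to-data mapping concrete before we bring in ScraperWiki, here is a stdlib-only sketch using Python’s html.parser (not the lxml approach used in the actual scraper below; the snippet it parses is the stickie above):

```python
from html.parser import HTMLParser

class StickieParser(HTMLParser):
    """Collects {span class: text} pairs from one stickie snippet."""
    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None  # class of the span we are currently inside

    def handle_starttag(self, tag, attrs):
        if tag == "span":
            self._current = dict(attrs).get("class")

    def handle_endtag(self, tag):
        if tag == "span":
            self._current = None

    def handle_data(self, data):
        if self._current:
            self.fields[self._current] = data

snippet = '''<li class="stickie">
  <a href="/view/f5ai9" rel="bookmark">
    <span class="location">Mumbai</span>
    <span class="date">Sep 27</span>
    <span class="headline">Android+Java hacker at Mobile Payments startup</span>
    <span class="company">PayMe</span><span class="new">New!</span>
  </a>
</li>'''

parser = StickieParser()
parser.feed(snippet)
print(parser.fields["location"])  # Mumbai
print(parser.fields["company"])   # PayMe
```

Each span’s class attribute becomes a column name and its text becomes the value, which is exactly the trick the real scraper uses.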

Group stickies contain more than one job posting. These stickies are also made of span elements, but with a slight difference: the first stickie of the group is inside an anchor, while the others are inside divs. See below:

<li class="stickie grouped">
	<a href="/by/d52f64f84d243b73dc01b15738520375">
		<span class="location">Bangalore</span>
		<span class="date">Sep 28</span>
		<span class="headline">Search, Relevance Architect,Bangalore</span>
		<span class="company">eCommerce</span><span class="new">New!</span>
	</a>
	<div class="stickie grouped under">
	    <span class="location">Bangalore</span>
	    <span class="date">Sep 28</span>
	    <span class="headline">Demand Generation, Relevance Engineer(Bangalore)</span>
	    <span class="company">eCommerce</span><span class="new">New!</span>
  	</div>
  	<div class="stickie grouped under">
	    <span class="location">Bangalore</span>
	    <span class="date">Sep 28</span>
	    <span class="headline">Big Data, Systems Engineer in Bangalore</span>
	    <span class="company">eCommerce</span><span class="new">New!</span>
  	</div>
</li>

We are going to extract information from both single and grouped stickies. The code below (about twenty lines, excluding comments) is my attempt to do that.

# HasJobs Scraper
# Scrapes from HasGeek Job Board
# http://jobs.hasgeek.in

# By Santhosh Kumar Srinivasan
# Fork at https://github.com/sanspace/scrapers.git

# To run at scraperwiki.com
# or locally using http://blog.scraperwiki.com/2012/06/07/local-scraperwiki-library/
# or https://github.com/scraperwiki/scraperwiki_local

# Change src at the bottom of this script to any desired job board URL such as
# http://jobs.hasgeek.com/category/programming
# or http://jobs.hasgeek.com/type/freelance
# to fetch categorized jobs instead of everything

import scraperwiki
import lxml.html

def save_data(elem, jobs):
    # Get all the span elements which has the data we look for
    for span in elem.cssselect('span'):
        jobs[span.attrib['class']] = span.text_content()
    # Saving to the DB. Needs a dict and a unique key
    print scraperwiki.sqlite.save(unique_keys=['link'], data=jobs)

def scrape_content(url):
    html = scraperwiki.scrape(url)
    root = lxml.html.fromstring(html)
    # select all the stickies except the first one
    # Technically siblings of the first stickie that says POST A JOB
    # Refer http://api.jquery.com/next-siblings-selector/ 
    for job in root.cssselect('ul#stickie-area li#newpost ~ li'):
        jobs = dict()
        # Have to build the URL as the anchor is relative
        jobs['link'] = url + job.cssselect('a')[0].attrib['href']
        if (job.attrib['class'] == "stickie grouped"): # group postings
            # Get all direct children of the grouped stickie
            # Refer http://api.jquery.com/child-selector/
            for elem in job.cssselect('li > *'):
                save_data(elem, jobs)
        else:
            save_data(job, jobs)

# Let's get started
src = 'http://jobs.hasgeek.in'
scrape_content(src)

Check out the output. Create a new scraper using this code at ScraperWiki, or just fork this scraper to try it yourself.

The function scrape_content gets the HTML content from the URL and then extracts all the postings from the page. This involves selecting the desired elements from the HTML using CSS selectors, just like the selectors you use in your stylesheets or your jQuery code.

For each posting, it then calls save_data to pull the desired data out of the posting. If it is a group posting, save_data is called for each posting in the group. This function simply extracts all the data from the stickie’s span elements, then calls a ScraperWiki function to save it to the SQLite DB.

A bit of Python knowledge is enough to carry this out, and many third-party libraries are available for advanced users. It’d be fun to scrape some of your favorite sites. Let me know which site you are going to scrape.


Automattic – Sanspace Blog | WordPress, Akismet, Jetpack, Gravatar, Coraline

This blog has been up for more than a year now. WordPress is an amazing package that gives you everything you need for a blog. As this blog owes so much to Automattic, the makers of WordPress and a few more amazing things, it’s time to show some love.

Automattic

We are passionate about making
the web a better place.

Automattic Logo Image

If Automattic sounds new to you, below are some of its products (it has more, some of which you have probably been using). Automattic has always been my favorite: if I see something new from them, I just start using it without giving it much thought. As the founder says,

“We are much better at writing code than haiku.”
- Matt Mullenweg, founder of Automattic

WordPress

Hassle-free blogging
even with your own domain
freemium model.

Wordpress Logo Image

This one doesn’t need much introduction. Numerous sites and blogs you visit daily run on it. In the lifetime of this blog, I doubt I would have done better with any other CMS or blogging platform. This site owes WordPress for that.

Akismet – Adios comment spam

Remember the days
innocent inboxes gleam
be spam-free again.

Akismet Logo Image

You need a lot of plugins to keep your blog alive; consider Akismet the most important of them. Without this wonderful plugin you may end up with thousands of spam comments and links. Any Akismet user would be grateful for what it does. I would say,

A wordpress installation without akismet is incomplete!

Gravatar – Globally Recognized Avatars

Identity is
visually portable
your face everywhere.

Gravatar Logo Image

Whenever you comment or sign up somewhere, Gravatar saves you the job of uploading an avatar for your profile on that site: it keeps your graphical identity consistent across sites. It also lets site owners enhance their site’s look and feel with user avatars, even when users are too lazy to upload them. There is also a simple API available for Gravatar, if you’re a developer. I once did something with it too.

Gravatar lets you rate your own avatars, which avoids incidents like Google Plus taking down avatars that violate its policy.
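As a side note on that Gravatar API: an avatar URL is just an MD5 hash of the normalized e-mail address appended to the avatar endpoint. A minimal sketch (the address and the size parameter here are made-up examples):

```python
import hashlib

def gravatar_url(email, size=80):
    # Gravatar hashes the trimmed, lower-cased address with MD5
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return "https://www.gravatar.com/avatar/%s?s=%d" % (digest, size)

# Normalization means these two produce the same URL
print(gravatar_url("  Someone@Example.COM "))
print(gravatar_url("someone@example.com"))
```

Any site can then fetch that URL to show the user’s avatar without asking for an upload.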

Jetpack by wordpress.com

Power of the cloud
right there in your own WordPress.
Supercharge your site!

Jetpack Logo Image

Jetpack supercharges your self-hosted WordPress site with the cloud power of WordPress.com. It’s a bundle of essential plugins that make your WordPress site awesome, and one of the good-to-have plugins out there.

Coraline – by Automattic

I have used a few themes, and so far I have found Coraline the best. It has a lot of customization options. So do let me know how you would like to view this blog: wide screen? One column or two? Darker or lighter?

I look forward to more great products from Automattic. Once again, thanks for the wonderful products, Automattic!


Miles to go this 2012

Resolutions are not really meant to be followed. Most diaries are empty except for a few weeks of January. Still, people buy a diary every year.

Remember my 2011 resolutions? The three roles have gone reasonably well.

Developer

Nothing much was done; I read some good books and did a few things, all with the help of some well-written tutorials (unlike mine ;-)).

So, what about 2012? I need to steer my learning and development in a more structured way. With Nettuts’ guidance, I am going to:

  1. Learn a New Language, Framework, Or Methodology – Python / Perl, Django
  2. Get Better At What I Know – C, Unix, PHP
  3. Explore a New Field – Web-designing
  4. Engage the Community – Bitbucket, Stack Exchange
  5. Teach Others – Tutorials
  6. Manage My Time (and Other Resources) Better – I should learn this. :-(
  7. Use Better Programming Practices – I already am, still..
  8. Generate Passive Income – At least, I should make this site self-sufficient!
  9. Take a Break – Easy one, I guess.

I hope for an interesting year ahead.

Blogger

2011 was a better year than 2010 for this blog. I got out of my block occasionally and wrote a few posts. As per Jetpack,

A San Francisco cable car holds 60 people. This blog was viewed about 1,400 times in 2011. If it were a cable car, it would take about 23 trips to carry that many people.

I should increase the frequency in 2012 and write a post at least every month. At the same time, the quality of the writing should improve too; even I don’t understand some of my own posts. The diary thing is on this year, too. ;-)

Teacher

This was the hardest role to fit into. I got a few opportunities and did well (or so I guess). I should get more involved in this in 2012 and make it count. I’ll be writing more tutorials, and I could definitely use some feedback! :-)

dart-board-image

Other than that, some miscellaneous but important resolutions are to:

  • Go green – as much as possible
  • Get healthy – minimize the elevator usage
  • Serve society – take on more social responsibility

That’s all for now! Let’s see how 2012 opens up. Hope the world doesn’t end! :-D Happy New Year!!

Darts Image: Patchareeya99 / FreeDigitalPhotos.net


Set Cron job online | cron, job, crontab, linux, unix, scheduler, automate, bot

What’s cron?

Cron is a scheduler on UNIX-like systems that lets a user schedule tasks (which we call jobs). A user can automate repetitive tasks using cron jobs; for example, you can schedule a program to run every Sunday. Lots of everyday things happen with the help of cron. For instance, at sanspace, our very own tweet bot tweets every day because of cron.

HostSo to iPage

Initially sanspace was hosted at HostSo, which provided cPanel, an amazing site configuration and management tool. However, sanspace was later moved to iPage, and after the migration I realized iPage did not have a cron feature. That was when I started looking for online cron jobs, needless to say free of cost. Google listed some sites, and I started with SetCronJob.

Online Cron Job

SetCronJob is a reliable cron job service with a simple webcron interface. As they say, it’s the easiest way to set up cron jobs: just set up a cron job as you would on your own site and it will run as you wish. Check out the screenshot below:

SetCronJob-Control-Panel-Screensh

Pros:

  • Simple Interface
  • Performance as expected
  • 100% free

Cons:

  • 1-month expiration limit (you need to renew your account every month)
  • No logs (available only for premium accounts)

I have used it for some time, and so far it’s working fine without any problems. There might be better services out there; let me know if you know one.

References

Cron reference: simple | advanced
SetCronJob FAQ: How to set up a cron job?

Clock Image: digitalart / FreeDigitalPhotos.net


Expand short URL – Simple PHP app for beginners | PHP, HTTP, cURL, Request, Response, Headers

This post is for beginners in PHP, beginners who, I assume, have already said hello to the world. I believe learning is doing; as advised here, beginners should start developing something in order to learn. Let’s do something simple today.

What we are going to do is expand any short URL. For example, http://goo.gl/YpDP4 should expand into http://blog.sanspace.in. The goal is set. Now, the tools for the job. What do we need?

  • A server which can serve PHP
  • A text editor – notepad will do

You should already have a server that serves PHP, since you’ve already said hello. Now, the core job: we are going to expand short URLs into long URLs. goo.gl, bit.ly, j.mp, t.co, and any other URL that points to some longer URL will be expanded to its target. And not just short URLs: we will also resolve redirected URLs to their targets. For instance, http://labs.google.com will be resolved to http://www.googlelabs.com/, as the former redirects to the latter.

How to?

How are we going to do that? We must understand a few basic things first. Heard of HTTP requests and responses? No? Servers work on a request-response model, like a Q&A session: you ask something, you get something. In the same way, you request something and get a response. The server responds (answers) you!

Request

Whatever you type in your browser’s address bar is a request. For example, if you type blog.sanspace.in, that is a request to the server where this Sanspace blog is hosted.

Response

What you get back when you type the URL into the address bar is the response. That is, after you type blog.sanspace.in and press Enter, the blog is shown to you; the entire page is the content sent as part of the response from the server.

In both cases, you don’t see the whole picture. blog.sanspace.in alone is not the whole request; more information is sent along with the URL. Likewise, the page you see is not the whole response; other information is served along with the content.

If you don’t get these things yet, that’s not a problem; keep going and you will understand soon. The code below gets a URL from the user, requests that URL from the server, and, after receiving the response, displays the Location property of the response header, which is actually the long URL.

//Get the response Location of a given URL
function eurl($url){
	//Get the response headers
	$response = get_headers($url, 1);
	//Return the Location property of the response header; if absent, show an error
	//The isset check avoids a notice when there is no redirect
	return (isset($response["Location"]) ? $response["Location"] : $url."<br /><b>No redirection for this URL!!</b>");
}

The above code is the simplest that meets our need. It’s pretty self-explanatory, and the comments help too. The live app works here. Try it, and you will see what the code can do.
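For comparison, the Location-lookup logic of eurl can be sketched in Python. This is an offline sketch of my own: the headers dict stands in for what PHP’s get_headers() would return, so no network call is made.

```python
def expand_from_headers(url, headers):
    """Mirror of eurl(): return the Location header if the
    response redirects, otherwise the original URL with a note."""
    if "Location" in headers:
        return headers["Location"]
    return url + " (no redirection for this URL!)"

# Headers as a server might report them for a short URL
headers = {"Location": "http://blog.sanspace.in"}
print(expand_from_headers("http://goo.gl/YpDP4", headers))  # http://blog.sanspace.in
```

The whole trick, in either language, is simply reading one header out of the response.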

The source for this app is available here.

Update: Aug 17 2012 00:19:52 Just got this from Command Line Magic ;-) So simple; silly me!!


Signal Handling in C – An overview of basics | C, Unix, Signal, Processing

I read about this topic recently, and here I’ve tried to summarize some of its basics.

What is a signal?

Wikipedia defines a signal as follows:

A signal is a limited form of inter-process communication used in Unix, Unix-like, and other POSIX-compliant operating systems.

If you have used UNIX for some time, you must have tried the key combination Ctrl + C at least once. Have you? If yes, you have already used a signal. To be exact, you have sent an interrupt signal (SIGINT).

Likewise, many more signals exist, and every signal means something. Check out the signal list.

Signal Image

What is signal handling?

We have all faced traffic signals. How do we handle them? Green? We go ahead. Yellow? We slow down. Red? We stop. In the same way, how a program (process) handles the signals it receives is referred to as signal handling.

How to handle signals?

How do you make your program handle signals? Back to the same example: how did we learn to handle traffic signals? Somebody, probably our parents, taught us. Similarly, you need to tell your program what it should do when it receives a signal. That’s what signal handling is all about.

How to program signal handling?

The Signal

We need to know which signal we’re handling. That’s the first thing. But before that, we need a handler.

The handler

The handler is the core of signal handling: it receives the signal and performs whatever we want done with it.

The connection

How do we connect the signal with the handler? We need to let the program know which handler it should use when it gets a particular signal; we can have different handlers for different signals. The process of setting up the handler for a signal is the signal setup.

signal function

The signal function connects the signal with the handler. The example below shows very basic signal handling.

#include <stdio.h>
#include <signal.h>

void sigHandlerFunc(int sig);

int main(){
	signal(SIGUSR1, sigHandlerFunc);
	raise(SIGUSR1);
	return 0;
}

void sigHandlerFunc(int sig){
	printf("Hey there! Just got this signal %d\n", sig);
}
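The same connect-then-raise pattern exists in Python’s signal module, which wraps the C API. A POSIX-only sketch (the handler and the received list are my own names):

```python
import os
import signal

received = []

def handler(signum, frame):
    # Called when the process receives SIGUSR1
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)   # the connection
os.kill(os.getpid(), signal.SIGUSR1)     # send ourselves the signal
print("Received:", received)
```

Just as in the C version, signal() sets up the handler and the raised signal is routed to it.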


Creating an SMS app using txtWeb.com | SMS, PHP, API

This is going to be a short post: just a test app using txtWeb.com.

txtWeb.com Logo

Have a look at the app before proceeding further, for better understanding.

Kural-app-avatar

Register at txtWeb.com. Under Apps, register a new keyword as shown below.

Screenshot of Registering new app on txtWeb

Now you will also get an application key from this screen; it is required to proceed. We will use it in the header of our application page. Have a look at the application code below:

<html>
    <head>
        <meta name="txtweb-appkey" content="557a202f-319b-4edc-9fb1" />
        <title>Kural - Get Kural Application</title>
    </head>
    <body>
		<?php
			$no = rand(1, 1330); // initializing to some random number
			//extracting the parameter "txtweb-message" from the http request sent by txtWeb
			if(isset($_GET['txtweb-message'])) {
				$no = $_GET['txtweb-message']; // Getting number from the user
			}
			include_once "../lib/getkural.php"; //include for getting Kural
			$arr = getKuralByNo($no); // Getting kural from Kural API
			echo $arr["line1"]."<br />".$arr["line2"];
			echo "<br />".$arr["trans"];
			echo "<br />#".$arr["no"];
		?>
    </body>
</html>

Note that we have used the application key here. txtweb-message is the message sent by the user; we get the number from it and reply using getKuralByNo(), which retrieves the Kural from the Kural API.

function getKuralByNo($no){	
	$url="http://getthirukural.appspot.com/api/2.0/kural/".$no."?appid=demoid&format=json";
	include_once "curlGet.php"; // include for curlGet()
	$json = curlGet($url);		
	if ($json){
		$arr =json_decode($json,true);			
		$arrKural = array("no" => $arr["KuralSet"]["Kural"][0]["Number"],
						"line1" => $arr["KuralSet"]["Kural"][0]["Line1"],
						"line2" => $arr["KuralSet"]["Kural"][0]["Line2"],
						"trans" => stripslashes($arr["KuralSet"]["Kural"][0]["Translation"]));
		return $arrKural;
	}
}

Now txtWeb will request this app page whenever a user sends an SMS, with the message passed along in the request. That’s exactly what we read from $_GET['txtweb-message'] and use to call getKuralByNo(). Our app page then returns the Kural and its explanation, and whatever our page returns is sent back to the user who sent the SMS. That’s it; we’re done with the application. Note: there is no validation or exception handling yet, as this is just a test app. Do include those if you’re building a real one.
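The parameter-extraction step can be sketched with Python’s standard library as well (a hedged sketch of my own, mirroring the $_GET lookup above; the URL is a made-up example):

```python
from urllib.parse import parse_qs, urlparse

def extract_number(request_url, default=1):
    # Pull the txtweb-message parameter out of the request URL,
    # falling back to a default when it is absent
    qs = parse_qs(urlparse(request_url).query)
    values = qs.get("txtweb-message")
    return int(values[0]) if values else default

print(extract_number("http://example.com/kural?txtweb-message=42"))  # 42
```

Whatever the user texts arrives as that one query parameter, so everything else is ordinary web programming.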

Let’s test it. From your mobile, send an SMS as below:

Type @kural <no> (you can replace <no> with any number from 1 to 1330)
Send it to 092433 42000

You will receive a Kural and its explanation as an SMS. That’s all, folks.

Follow the twitter account @twKural if you wish to receive a Kural daily.


Creating a Twitter Bot with Minimal code | Cron, cPanel, Twitter API, getThirukural API

This time we are going to create a Twitter bot that tweets a Kural daily. Sounds familiar? Yes, we did half of this in the previous post, where retrieving a Kural from the getThirukural API was demonstrated. Now we are just going to post that Kural to Twitter via the Twitter API.

Here are the tasks:

  1. Get the Kural
  2. Tweet it
  3. Repeat 1 and 2 daily!
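The message-building part of task 2 can be sketched up front (a hypothetical helper of my own; it mirrors the two-tweet preparation in the PHP script later in this post, with a length check added since tweets were capped at 140 characters):

```python
def prepare_tweets(number, line1, line2, translation):
    """Build the two tweets: the Kural couplet, then its
    translation, each tagged with the Kural number."""
    msg1 = "%s %s #%d" % (line1, line2, number)
    msg2 = "%s #%d" % (translation, number)
    # Tweets were limited to 140 characters at the time
    return [m for m in (msg1, msg2) if len(m) <= 140]

tweets = prepare_tweets(1, "First line", "Second line", "A translation")
print(tweets[0])  # First line Second line #1
```

Once the two strings are ready, posting them is just two API calls.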

Task 1 is easy, as we have already done it.

Task 2 is new to us: interacting with the Twitter API, in this case to post a tweet. It needs the items below.

  1. A Twitter account – this is the account from which our bot will tweet. Just sign up at Twitter and get a short username.
  2. A registered Twitter application – go to Twitter and register an application, then get the keys available under your application details:

    • OAuth Token
    • OAuth Secret
    • Consumer Key
    • Consumer Secret

Once we have these ready, the next task is to write the script that connects to the Twitter API and tweets. We will use a library created by Abraham: download the two files OAuth.php and twitteroauth.php from GitHub. Now, our own script to get it done.

//getting JSON from API
$url = "http://getthirukural.appspot.com/api/2.0/kural/rnd?appid=demoid&format=json";
include_once "../lib/curlGet.php"; //include for curlGet()
$json = curlGet($url);
$message1=0;
$message2=0;
//parsing JSON into variables
if ($json){
	$arr =json_decode($json,true);
	$kuralNo = $arr["KuralSet"]["Kural"][0]["Number"];
	$l1 = $arr["KuralSet"]["Kural"][0]["Line1"];
	$l2 = $arr["KuralSet"]["Kural"][0]["Line2"];
	$transln =$arr["KuralSet"]["Kural"][0]["Translation"];
	//preparing tweets
	if ($l1 && $l2 && $transln){
		$message1 = $l1." ".$l2." #".$kuralNo;
		$message2 = $transln." #".$kuralNo;
	}
}
//tweet it up
if ( $message1 && $message2 ){
	require_once('../lib/twitteroauth.php'); //include for oauth library
	define("CONSUMER_KEY", "l6EesXGsanx9ssanTtBg");
	define("CONSUMER_SECRET", "pvgY13YN8rq0jsand5KjmQisanE6IRNl4EqkvklU");
	define("OAUTH_TOKEN", "261173393-rVrzykp1EkTfaWfvisanyK5AoJcHSzRsanquln4g");
	define("OAUTH_SECRET", "pV44WyXZZdT5OvSt0WMBsan9jb34iPGlsanTxBAdRO8");
	$connection = new TwitterOAuth(CONSUMER_KEY, CONSUMER_SECRET, OAUTH_TOKEN, OAUTH_SECRET);
	$connection->get('account/verify_credentials');					
	//tweet 1 - the Kural
	$connection->post('statuses/update', array('status' => $message1));
	//tweet 2 - the translation
	$connection->post('statuses/update', array('status' => $message2));
}

We are done with the second task; if you run the script above, it will tweet the Kural and its translation. The remaining piece is to make it happen every day.

We will use cron to run this script daily. I scheduled a cron job on cPanel as below.

0 23 * * *

This cron job calls our script at 23:00 every day (the five fields are minute, hour, day of month, month, and day of week). That’s all. Our bot is ready.

Follow twKural at Twitter



A simple API request on GetThirukural API | JSON, PHP and jQuery

In this post, I have tried a simple API request using PHP. This might help beginners understand a few basic techniques.

Accessing the Kural API is so simple that even I managed it. It gives you clear information, and all you have to do is fetch the JSON and do whatever you want with it. I have stopped at displaying it on the screen.

First, let’s see the options available at the Kural API. As usual, this API gives you two format options, XML and JSON; I have my eye on JSON. Beyond the format, you can request Kurals in three ways:

  • A specific Kural using its ID (number)
  • Any random Kural
  • Kurals ranging from X to Y, e.g. 1-10, 5-7, etc.

Now, the technologies available to work with this API:

  • jQuery
    • getJSON
  • PHP
    • json_decode
    • PEAR’s Services_JSON

The jQuery example on the Kural site is pretty neat: not much work, but a good result. Code from the Kural site:

$.getJSON("http://getthirukural.appspot.com/api/1.0/kural/rnd?appid=demoid&format=json&jsoncallback=?", function(data){
	$.each(data.KuralSet.Kural, function(i, Kural){
		$('#load-kural').html("#"+Kural.Number+"<br>"+Kural.Line1+"<br>"+Kural.Line2+"<br>"+Kural.Translation);
	});
});

<div id="load-kural"></div>

I thought of doing something on my own, so I looked at the options available in PHP to interpret JSON. I avoided XML since I wanted to learn about JSON parsing. The list below is what we are going to do:

  1. Get the JSON file from Kural API
  2. Parse the JSON contents
  3. Display the contents

First, we have to get the file from the API using the URLs provided at the API site. Let’s get a random Kural, i.e. a JSON document containing a single Kural. The Kural API’s JSON structure is KuralSet → Kural → Number, Line1, Line2, Translation, where Kural is an array holding the Kural number, both lines, and the translation.
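To make that structure concrete, here is a sketch in Python (the sample JSON below is hand-written to match the structure just described, not a real API response):

```python
import json

# A hand-written sample matching the KuralSet -> Kural structure
sample = '''{
  "KuralSet": {
    "Kural": [
      {"Number": 1,
       "Line1": "First line of the Kural",
       "Line2": "Second line of the Kural",
       "Translation": "An English translation"}
    ]
  }
}'''

data = json.loads(sample)
kural = data["KuralSet"]["Kural"][0]
print(kural["Number"])       # 1
print(kural["Translation"])  # An English translation
```

The PHP version below does exactly the same walk with json_decode and nested array indexing.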

I found two options for fetching the file: file_get_contents and cURL. The former worked fine on my local host but failed on my server, so I had to use cURL, and it was hard for me to understand cURL session initiation and so on. I simply used the handy snippet below. I don’t remember the source now; I’ll credit the author once I find it.

function curlGet($url) {
	$ch = curl_init();
	curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
	curl_setopt($ch, CURLOPT_VERBOSE, 1);
	curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
	curl_setopt($ch, CURLOPT_URL, $url);
	$result = curl_exec($ch);
	curl_close($ch);
	return $result;
}


Manmadhan Ambu – Not exactly a review

Last night’s late show of Manmadhan Ambu ended pleasantly. Every time I watch a film I feel like writing something, but it never happens; this time I decided to give it a shot, and here it is, done. Senthil theatre in Coimbatore has a good name, and that’s where, after a long time, we sat close to the screen and watched the film.

The film opens in an ordinary way for the first ten minutes or so; even after Kamal’s entry it doesn’t really heat up. Lines like “the world at your feet”, or talking of seeing a clear way forward while standing in a dead-end alley, are enjoyable here and there, but the first half doesn’t measure up to the second. Madhavan is the one who shines: even playing a suspicious-minded character, he nails the role. The drinking scene and the “Mere paas maa hai” bit are terrific.

From the interval almost up to the climax, it’s laughter all the way. Sangeetha and that group are great too. Devi’s score blends with the scenes and is a treat; “Neela Vaanam” has fine picturization and lyrics. I don’t know how Ravi manages to make films in every genre. Is this really the man who made films like Paattaali and Paarai??

Kamal-ji, we expect a lot from you. Unnaipol Oruvan and Manmadhan Ambu are fine films, but you have built expectations that go beyond even these; do live up to them.

After a long time, Trisha gets a chance to appear on screen as an intelligent woman, and she makes good use of it.

Some reviews I liked:

Instead of saying it’s not up to Panchathanthiram’s level, just go, watch, have a laugh, and come back. After a long time, they have given us a film worth the ticket money. And finally, just to say something:

Let honesty remain a luxury; at least let that luxury be ours. (They did irritate us by showing BlackBerry and iPhone all through the film..)
