Monday, August 27, 2018

Where is that server?

I was trying to find the geolocation of a few DNS IPs that we use for our proxy server today, and stumbled upon these two sites: https://tools.keycdn.com/geo and https://ipstack.com

So, just out of curiosity, I decided to check some Indian (and India-related) sites and their geolocations. The most surprising ones for me were Paytm and Flipkart: while the latter seems to be hosted in India, Paytm appears to be hosted on AWS in Singapore. Of the banks, HDFC oddly seems to use Cloudflare hosting from the USA. With GOI (hopefully soon) enacting the DEPA (Data Empowerment and Protection Architecture), many of these sites would need to move within India; this includes some that I am directly associated with. Also, despite what some Chinese mobile companies want you to believe about their user data being stored only in India / Singapore (which may be true), their IP address locations seem to tell a different story. One more fun fact: google.co.in points to a server in the USA.
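
If you want to script these lookups, here is a minimal sketch in Python using the ipstack API (the access key placeholder and the exact response fields, country_name and city, are assumptions - check their docs):

import json
import socket
import urllib.request

ACCESS_KEY = "YOUR_IPSTACK_KEY"  # hypothetical placeholder key

def where_is(hostname):
    # resolve the hostname, then ask ipstack where that IP is located
    ip = socket.gethostbyname(hostname)
    url = "http://api.ipstack.com/%s?access_key=%s" % (ip, ACCESS_KEY)
    with urllib.request.urlopen(url) as resp:
        geo = json.loads(resp.read().decode("utf-8"))
    return ip, geo.get("country_name"), geo.get("city")

for site in ["paytm.com", "flipkart.com", "google.co.in"]:
    print(site, where_is(site))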

Sunday, August 26, 2018

A quick trip to my alma mater: Pune University

I paid a quick visit to my alma mater, the PU (now SPPU). I was visiting the campus after quite a long gap, and, more so, I was going in the monsoon - a time when the campus is lush green and, as always, more pleasant than the rest of the city. I was pleasantly surprised to find that the British-era main building has been mostly renovated and looks great again.

And anyone who has been at PU for even a short graduate course cannot forget the great old tree - the place where multiple generations of students relaxed, studied, chilled and romanced. I always make it a point to walk past this tree every time I visit the PU campus. The tree is getting old - one branch was completely rotten - but it still stands as a magnificent caretaker of the numerous seekers of knowledge who flock to this Oxford of the East.

Saturday, August 25, 2018

Odd tech trials #1

I use a lot of tech. And more often than not, I use it in ways that a typical user would not. Nor is it the way tech companies design their products to be used. Here are some odd experiments I am currently running:

1) Using a Windows desktop PC without a mouse. I have an almost decade-old machine whose internals have been upgraded over time. About 5 years ago (around the time Windows 8 was released), I had also bought a touch screen monitor, but seldom used the "touch" part of it. A few weeks ago, I decided to give up my old wired mouse and keyboard in lieu of a wireless keyboard I had bought for an iPad (but for which I had found no use). Now, I didn't have a wireless mouse, so I decided to rely on the touch screen alone. So far the experiment has been going great. But these days I use my desktop PC a lot less, as I am away from home most of the time. (PS: this post is being written on my desktop PC.)

2) Using an Android phone without a Google account. I have been using an Oppo A3s as my secondary phone for a while now. A week ago, however, I was getting increasingly upset over the way Google was tracking my movements. Yes, I could shut off the tracking options, but instead I decided to nuke the phone and use it without any Google account. Without a Google account on the phone, though, there is no way to download apps from the Play Store. Being an Oppo phone, there was an "Oppo Store" from which you could install most of the popular "Play Store" apps, and the Oppo store didn't need you to set up an account or anything - but it indicated that it would constantly send "usage information" to "improve user experience". Something I have ZERO interest in, and of course something that raises my inner eyebrows. So I nuked the phone again and set it up without anything, disabling and removing everything possible down to the bare minimum. But I needed two apps (a popular messaging app and an apartment security app) - for both of these I found a trusted source for the APK and side-loaded them onto the phone. As of now things are going fine. I don't recommend doing this, but one can pretty much have a locked-down (Android) phone, if one really wants it that way.

3) Using an Android phone just to take photos. I had been reviewing the so-called phone "Kodak Ektra". Even though you can make phone calls using this "phone", it is very much a camera in every ergonomic sense. I tried to use it as my secondary phone but I just couldn't. So next I nuked this phone and removed all the apps (including phone and messages) except the camera app, the Google Photos app and Snapseed. Now I have a "smart camera" that has only one function: take photos, edit on device (using the fantastic Snapseed), and back up to Google Photos over a Wi-Fi connection. Come to think of it, it sounds pretty cool. I am just wondering why companies have not yet come up with camera products that offer this very limited and focused function: take photos, offer on-device editing, and back up to cloud storage. I definitely think there is a sizeable market for this device - and I already have a neat prototype of it ;)

Friday, August 24, 2018

Playing with OpenCV : dynamic magnification

I have been playing around with OpenCV (https://docs.opencv.org/3.3.0/index.html) on Android quite a bit these days. While I am using OpenCV for image processing and image recognition work (using the Caffe bridge), I wanted to implement a UX scenario with dynamic magnification while a user interacts with an image - the kind of loupe effect you see on iOS when selecting text.

This is the equivalent for selecting a part of an image. After experimenting a bit, I figured it could be done with a few simple lines of code:

 // BLACK, WHITE and GREEN are Scalar colour constants; rectangle(), circle()
 // and resize() are static imports from org.opencv.imgproc.Imgproc
 Mat img = ...;         // the source image is read into this Mat
 int sz = ...;          // size (in pixels) of the square source area to magnify
 int sf = 2;            // the scale factor
 int zoomSz = sz * sf;  // total width/height of the zoomed area

 // top-left corner of the source square around the touch point
 int mnHX = ..., mnHY = ...;
 // top-left corner of where the magnified area is drawn on the image
 int mnX = ..., mnY = ...;
 int mxX = mnX + zoomSz;
 int mxY = mnY + zoomSz;

 // the centre of the circle depicting the zoom (relative to the zoomed area)
 Point circleCenter = new Point((mxX - mnX) / 2, (mxY - mnY) / 2);

 // the source rectangle
 Rect rect = new Rect(mnHX, mnHY, sz, sz);
 // the scaled (destination) rectangle
 Rect scaledRect = new Rect(mnX, mnY, zoomSz, zoomSz);
 Mat roiFrame = new Mat(img, rect); // the area of interest to magnify
 Mat scaledFrame = new Mat(zoomSz, zoomSz, img.type());
 Mat circleMask = new Mat(zoomSz, zoomSz, CvType.CV_8UC1); // masks must be 8-bit
 // create a circular mask
 rectangle(circleMask, new Point(0, 0), new Point(zoomSz, zoomSz), BLACK, -1);
 circle(circleMask, circleCenter, sz, WHITE, -1);
 // zoom the rectangular area
 resize(roiFrame, scaledFrame, new Size(), sf, sf, INTER_LINEAR);
 // now copy the zoomed area into the source image, applying the circular mask
 scaledFrame.copyTo(img.submat(scaledRect), circleMask);
 // indicative circle around the magnified region
 circle(img, new Point((mxX + mnX) / 2, (mxY + mnY) / 2), sz, GREEN, 1);

 // cleanup
 roiFrame.release();
 scaledFrame.release();
 circleMask.release();

The above code is essentially a fast way to take the rectangular source area around the point where the user is interacting, scale it by the needed factor, apply a circular mask to give the effect of a magnifying glass, and copy the result back onto the source image, overlaying it to give a smooth, dynamically magnified view while selecting a portion of the image.

Friday, March 02, 2018

Axiostat becomes the first from India to get USFDA approval for wound dressing.

Wounds are traumatic. Some wounds may kill a person immediately; others may hurt over a longer period. Having been associated with a company researching this domain, I can tell you that in both cases wound care is absolutely essential, and an important part of that is dressing.

Axiostat (http://www.axiobio.com), a Bangalore-based company, just got USFDA approval for their patented (http://www.axiobio.com/axio-clotting-technology/) emergency wound dressing tech (http://www.newindianexpress.com/states/karnataka/2018/mar/01/axiostat-is-first-indian-wound-dressing-to-get-usfda-nod-1780418.html). Apparently, their products are already being used by the Indian military (http://www.axiobio.com/military/). Now, with this approval, they also have the possibility of a larger global market.

This is quite good news. These are the kind of companies that GOI needs to foster and encourage. They may not bring short-term benefits (of creating lots of jobs, say), but they have long-term impact, not only in India but with global outreach - something I had argued back in 2014 in this article (http://tovganesh.blogspot.in/2014/12/make-for-india-makes-more-sense-than.html), and also echoed in a well-written article by Amit Paranjape (https://swarajyamag.com/science/israel-the-startup-nation-lessons-for-india).

As a policy, GOI not only needs to encourage local manufacturing, but would have to go the extra mile to encourage disruptive, forward-thinking companies who may not have capital today but have the superior brain power to make products and IP for the world audience of tomorrow.

Tuesday, December 12, 2017

A week with Apple Watch

So, here I am. After avoiding getting a watch in the first place, I recently bought an Apple Watch (Series 1) for myself. I didn't go for the Series 3 because I am not really a swimmer or runner, and the Series 3 doesn't actually offer substantially more in features for the price difference. With the difference in price, you can actually purchase AirPods as well.

In 2013, months before Apple released their first Watch model, I had written a post stating why calls, text and tweets won't define a smartwatch (see: It is not calls, text and tweet that would make a smartwatch) - which I am pleased that I wrote, as I turned out to be right in every aspect of it. Apple, with its Watch, initially had a misstep: it tried to position itself as a luxury watch maker, failed, and quickly pivoted its strategy to what truly makes sense for a wearable watch - tell time, track fitness, offer a quick way to call up a digital assistant, and let 3rd-party apps extend the functions not in the core system. When I see the Siri watch face on my watch, I can't help but pat myself on the back at how close this interaction model is to what I described in the article above :)

The interaction model that I proposed and the Siri Watch face have so much in common (see http://tovganesh.blogspot.in/2013/09/it-is-not-calls-text-and-tweet-that.html).

Third-party apps are there, but they still have a long way to go.

There is still a lot to improve before we really have a wearable computer that doesn't look like a piece of brick and whose battery lasts at least a full day of heavy use. The Series 3 with LTE is definitely not that device, as Joanna from WSJ notes in her review of the latest iteration of the watch - the one I didn't get (https://www.wsj.com/articles/apple-watch-series-3-review-untethered-and-unreliable-1505905203).

One thing is for sure: smartwatches are here to stay. It remains to be seen whether they take as much time as smartphones to evolve, or whether we see substantial breakthroughs in a much shorter period.

Monday, November 20, 2017

Using the iPhone for programming

I have been using my iPhone like a computer for some time now. The primary thing I do with my computer is programming. I dislike laptops, and dislike carrying one around even more. About 2 months ago, I experimented with using the iPad as my primary go-to computer. With the multitasking enhancements introduced in iOS 11, I could pretty much use it as a primary computer with a number of work apps installed: Terminus (for ssh to a development Linux server), Pythonista (a fantastic on-device Python interpreter with a number of libraries I use - numpy to be specific), Working Copy (for managing git repositories), and Textastic (the most fantastic source code editor for iOS). With these apps in place, my next quest was to see if I could manage even without the iPad around. This is week 2 of the experiment, and I haven't faced many issues with on-the-go programming. These tools just work great for me. Now I can pretty much keep my laptop at home and use the desktop at work, while on the move I just use my phone. A few things like join.me and TeamViewer may just work better on a bigger screen, but then I can also connect my phone using the Lightning-to-VGA dongle that I sometimes carry - if there is really the need.

Oh - and did I tell you that I wrote this post on the same phone ;) 

Peace. 

Friday, August 25, 2017

Programming in Devanagari [Revisited]

Exactly a decade ago, I wrote this post - http://tovganesh.blogspot.in/2007/08/programming-in-and-for-devanagari.html. I was exploring JavaFX, released by Sun Microsystems back then. I am no longer using JavaFX actively. But a decade later I am exploring Go, and the first code I wrote this morning was this:

package main

import "fmt"

func main() {
    fmt.Println("ॐ नमो भगवते वासुदेवाय")
}

So I just thought of reconnecting with a decade-old post. The idea stays; the mode has changed. (Go source is defined to be UTF-8, so the Devanagari string literal just works.)

Tuesday, August 01, 2017

Simple script to extract final GAMESS geometry

I am dabbling with QM codes again, so I needed a quick script to extract the final (converged) geometry from a GAMESS output file, without much baggage of other dependencies - so I wrote a quick one in Python. You can get it from GitHub: https://github.com/tovganesh/myrepo/blob/master/extractConvergedGeometry.py
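
For the curious, here is a minimal sketch of how such an extractor can work. This is a sketch only: the section markers and column layout below are assumptions based on typical GAMESS optimisation output, and the actual script on GitHub may differ.

import sys

def extract_final_geometry(path):
    with open(path) as f:
        lines = f.readlines()
    # find the last occurrence of the equilibrium-geometry marker
    start = None
    for i, line in enumerate(lines):
        if "EQUILIBRIUM GEOMETRY LOCATED" in line:
            start = i
    if start is None:
        return []
    atoms = []
    for line in lines[start:]:
        parts = line.split()
        # atom lines look like: SYMBOL  CHARGE  X  Y  Z
        if len(parts) == 5:
            try:
                atoms.append([parts[0]] + [float(v) for v in parts[2:]])
            except ValueError:
                continue  # skips header lines such as "ATOM CHARGE X Y Z"
        elif atoms:
            break  # first non-atom line after the block ends it
    return atoms

if __name__ == "__main__":
    for atom in extract_final_geometry(sys.argv[1]):
        print("%-4s %12.6f %12.6f %12.6f" % tuple(atom))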

I will call these scripts quick and useful scripts (QUS) henceforth, and post others when I feel the need :)

Friday, June 30, 2017

Count the number of pages for each PDF in a folder

This is just a note about a script which may be useful to you. It calculates the number of pages per PDF (using pdftk) and prints the final count.

import fnmatch
import os
import sys

# collect all PDF files under the given folder
matches = []
for root, dirnames, filenames in os.walk(sys.argv[1]):
    for filename in fnmatch.filter(filenames, '*.pdf'):
        matches.append(os.path.join(root, filename))

count = 0
for mat in matches:
    # quote the file name so paths with spaces survive the shell
    cmd = 'pdftk "' + mat + '" dump_data | grep NumberOfPages > pn.log'
    os.system(cmd)
    try:
        f = open("pn.log")
        l = f.read().strip().split(":")[1].strip()
        f.close()
        print(mat + "," + l)
        count = int(l) + count
    except (IOError, IndexError, ValueError):
        # skip PDFs that pdftk could not read
        continue

print(count)


Have a great weekend! :)

Tuesday, June 06, 2017

On "The Computer's Common Sense"

Background
On the surface of it, this is a followup to the blog "The Computer's Common Sense" [read here: https://rulasense.wordpress.com/2017/05/] by my friend AKD (https://twitter.com/alok_damle), who is passionate about building a new kind of intelligent system. It is also about my understanding of the machine learning tools that I have used in my work at VLife (now Novalead Pharma). These thoughts come from someone with a beginner-to-intermediate ML background, so this is more of a learning-via-conversation exercise for me, philosophically skewed rather than technically deep.

Artificial Intelligence vs Human Intelligence (commonly called common sense)
AKD starts off his blog with a title that makes you think a bit. It seems to equate Human Intelligence [W1] with common sense [W2]. To me, however, common sense (however uncommon it is) is one part of human intelligence; it is not the only form of intelligence that humans have. Further, common sense, as the name suggests, is not something specific to an individual: it has evolved over time from a group of individuals, representing common knowledge - or, to put it in other words, it is "ensemble intelligence" rather than something that represents an individual human. Thus, I feel that human intelligence is a combination of many factors, only one of which is common sense. The decisions that humans take are a cumulative effect of various factors.

RULA – Read Understand Learn Apply
If we get past that oversight, some of the things begin to make sense to me. The screwdriver example (https://rulasense.wordpress.com/2017/05/17/artificial-intelligence-vs-common-sense/) kind of makes sense for the current state of the art in AI. It is quite possible that no AI will suggest using your fingernails instead of a screwdriver! *. But the reason for this probably has to do with the other environmental factors that the human is in. The human brain, more often than not, tries to correlate the present situation with past situations it has encountered (when in isolation), or it tries to correlate with what others have discovered in similar situations (the common sense part). In isolation, a human brain probably works by a "read (or observe) - understand - learn - apply" cycle, but that may not always be the case. The second term, "understand", is kind of a misnomer here, because one can shorten this to "read (or observe) - learn - apply", with "understanding" coming at a later stage - probably a far later stage. A lot of what we humans do probably translates to "read (or observe) - learn - apply". For instance, take any kid: he observes his parents, tries to learn from them, and then does similar things. He doesn't understand what he does till he grows up. Thus I feel "understanding" comes after a series of reinforcement learning and application of what was observed. Evidently a lot of AI at the moment is focused on the "read (or observe) - learn - apply" cycle and probably never comes to the point of "understanding". Deep learning may, however, be what actually brings understanding to this process [W3].

Machine Learning vs Human Learning
That brings me to the next part of the blog, which is kind of generically titled. I think the core theme of this section is to bring home the point that most AI today is basically data driven. Human learning, however, can happen at a much superior pace and doesn't need as much data. This is quite true. But I think this is possible because the human brain is not alone: our brains are connected with a lot of other intelligent beings, and this collective brain power - which is essentially, to a large extent, what "common sense" encompasses - influences our individual brain's learning capabilities. The "collective brain power" is not necessarily human; it can come from any other form of intelligent behaviour - other animals, or even insects. The human brain is capable of capturing and basing its learning on information acquired by other forms of intelligence. A counterpoint to the kids example above is how often we find that little ones think differently from what was previously conceived. That, I feel, is because the kid's brain is still somewhat "disconnected" from the "collective brain power", which prompts it to potentially discover new ways to solve a problem - where an adult's brain just defaults to the "common sense" part.

AI at the moment is limited to what humans feed it. It doesn't have unrestricted access to the environment outside, as we humans have. Whether that is a shortcoming of current AI, or whether AI as implemented today needs a fundamental rethink, is yet to be seen. AKD thinks there is an alternate way that is not yet explored. I await to see what that is.


NOTES:
* I am not sure how IBM Watson [R1] would respond - because Watson is a totally different take, at the edge of AI research today, and that it could beat humans in the game of Jeopardy! is nothing short of amazing.

References:
R1) IBM Watson: https://www.ibm.com/watson/
R2) L. Deng, G. Tur, X. He, and D. Hakkani-Tur. "Use of Kernel Deep Convex Networks and End-To-End Learning for Spoken Language Understanding," Proc. IEEE Workshop on Spoken Language Technologies, 2012

Wikipedia:
W1) Human Intelligence https://en.wikipedia.org/wiki/Human_intelligence
W2) Common Sense https://en.wikipedia.org/wiki/Common_sense
W3) IBM Watson https://en.wikipedia.org/wiki/Watson_(computer)

Friday, June 02, 2017

Serval Project: Carrier independent network

Almost 5 years back, while putting down my idea of building a mobile experience for myself, I had suggested that a carrier-independent network is what I want - something that would not only distribute the burden of creating infrastructure but also free us from lousy carrier plans, creating a world where communication between humans is free wherever they are. [Ref: http://tovganesh.blogspot.in/2012/01/kosh-building-mobile-user-experience.html]. Obviously carrier-based / satellite systems are necessary for emergency situations - but our reliance on them could definitely be minimised.

So when I saw the Serval Project (http://www.servalproject.org/), I was pleasantly surprised that they have exactly the same goal. Moreover, instead of building a whole new OS as I had earlier proposed, they are going for the more practical solution of putting it in an Android app. This is a big shout-out to you guys developing the Serval Project. It is one of those moments when you feel that you are not alone in your thoughts about how to make things different in this world. I have installed the app on my secondary Android device and tested it with a friend. Though the interface is quite primitive at this stage, and the call quality not up to the mark - it works. It is experimental and, yes, it will improve.

After digging a bit into the history of the Serval Project (http://developer.servalproject.org/dokuwiki/doku.php?id=content:about), I discovered that it was proposed almost 2 years before I had written the above-mentioned article, and that an early version of the system was used for emergency response during the Haiti earthquake.

The Serval Project is also open source (https://github.com/servalproject), and in the coming days I plan to explore the project in more depth to see if I can contribute in some way.

Meanwhile, anyone should be able to install the app from Google Play (https://play.google.com/store/apps/details?id=org.servalproject) and be part of the experiment - the quest to build a carrier-independent backup network.

Saturday, April 01, 2017

This is how birds enjoy water ...

This is how much the sunbirds in my garden enjoy even the little water you share with them this summer. They are pretty cool :)

Musings on Nature around Gandhkosh

Some shots of nature taken around DSK Gandhkosh, my work home, over a period of a few months. Hope you like them :)

Friday, February 24, 2017

A "Generic Component" in Angular 2

I am generally a lazy person, especially when it comes to writing UI code (although I can have endless comments and critiques on someone else's UI and UX ;-)). I find it repetitive, and any form of repetitive behaviour is ripe for automation. One such activity, which I recently encountered in a project, was creating forms - a lot of them - with an Angular 2 front end. I remember that in the good old MS Access days these were just a click away - although I disliked it, this is what I did in my first paying job, during a school summer break.
I am not sure we have something equivalent here - more so when Angular is changing so rapidly and keeps breaking every other month.

However, it is rather easy to write a "generic component" that can be configured using a simple JSON, giving you a new form without writing any of the usual code. This has two main advantages: 1) you have a single place to fix when Angular changes something in its structure; 2) you have a super-reusable form engine that can be configured on the fly, allowing you to do cool things like storing form definitions in a backend service and updating the Angular 2 app on the fly.

To start off, you will need to define a JSON which can be used to construct the UI on the fly.

this.componentJSON = {};
this.componentJSON['title'] = "Trials";

this.componentJSON['formItems'] = [
 { "type": "text", "id": "trial", "name": "Trial Name", "description": "Name of trial", "theValue": "", "param_name": "trial_name" },
 { "type": "text", "id": "sponsorer", "name": "Sponsorer", "description": "Sponsorer of the trial", "theValue": "", "param_name": "sponsorer_name" },
 { "type": "button", "id": "submit", "name": "Submit", "description": "Submit the form", "api_call": this.createTrial },
];


The above JSON is intended to create a simple form with a title, two text fields and a button. The api_call parameter of the button is a service object used to make an API call. This JSON should be defined in any parent component that intends to use the generic component, typically being initialised in the ngOnInit() method.

Next we define generic.component.ts and generic.component.html as follows:

import { Component, OnInit, Input } from '@angular/core';

@Component({
   moduleId: module.id,
   selector: 'generic-cmp',
   templateUrl: 'generic.component.html'
})
export class GenericComponent implements OnInit {

   @Input()
   componentJSON: any;

   constructor() { }

   ngOnInit() {
   }

   // make the API call attached to a button item, passing the form values
   callAPI(item: any) {
     item.api_call.api_call(this.processInput(this.componentJSON)).subscribe((res: any) => {
       if (res.status == 0) {
         alert(res.result.message);
       } else {
         alert(res.error.error_message);
       }
     });
   }

   // collect the values of all non-button items into the API parameter payload
   processInput(componentJSON: any) {
     var formItems = componentJSON['formItems'];

     var params: any = {};
     for (var frmItm in formItems) {
       if (formItems[frmItm].type != 'button') {
         params[formItems[frmItm].param_name] = formItems[frmItm].theValue;
       }
     }

     return params;
   }
}

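A minimal sketch of generic.component.html that walks the formItems JSON could look like the following (a sketch, assuming [(ngModel)] two-way binding for the text items and a click handler wired to callAPI() for the buttons):

<h3>{{componentJSON?.title}}</h3>
<div *ngFor="let item of componentJSON?.formItems">
  <label *ngIf="item.type == 'text'">{{item.name}}
    <input type="text" [id]="item.id" [placeholder]="item.description"
           [(ngModel)]="item.theValue" />
  </label>
  <button *ngIf="item.type == 'button'" [id]="item.id"
          (click)="callAPI(item)">{{item.name}}</button>
</div>
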
The trick, as always, is to generalise the JSON and then the generic component above to take care of different input forms, as well as to add validations. The callAPI function above, for instance, generalises an API call, whereas the processInput method creates the parameter payload for the API call from the JSON we created earlier. The advantage, again, is that simply changing the JSON pretty much re-creates the whole of the HTML; creating a different form just requires defining a new JSON.

Since this component needs to be used in multiple places, it would be wise to declare the directives associated with it in a shared.module.ts file:

import { NgModule, ModuleWithProviders } from '@angular/core';
import { CommonModule } from '@angular/common';
import { FormsModule } from '@angular/forms';
import { RouterModule } from '@angular/router';
import { BrowserModule } from '@angular/platform-browser';

import { NameListService } from './name-list/index';
import { GenericComponent } from './generic-component/generic.component';

@NgModule({
    imports: [CommonModule, RouterModule, FormsModule],
    declarations: [GenericComponent],
    exports: [CommonModule, FormsModule, RouterModule, BrowserModule, GenericComponent],    
})

export class SharedModule {
    static forRoot(): ModuleWithProviders {
        return {
            ngModule: SharedModule,
            providers: [NameListService]
        };
    }
}


Now a third component can embed this reusable, configurable component using the selector and the @Input defined above:
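
<generic-cmp [componentJSON]="componentJSON"></generic-cmp>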

That's it! You should have a fairly reusable forms module that you can fully configure using the JSON, without the need to keep writing the HTML and related component code every time.

A similar pattern may also be used for creating a generic service call. This is again useful, as your code can remain independent of changes that may occur in the basic underlying syntax of, say, actually calling the HTTP post method.

Wednesday, January 04, 2017

My tech wish list - 2017

1. MacBook with inbuilt cellular connectivity
2. iPhone with no ports - not even Lightning - with wireless charging. For a truly courageous future.
3. If I can just use my phone for everything - where is that elusive Surface Phone?
4. On-demand apps on iOS - so that I can save on my precious local storage
5. Driverless cars - where are they?
6. Super personalised medicines
7. An AI assistant for myself - without a cloud connected device
8. A battery that lasts for 1 month on a 15 min charge and doesn't explode.

Saturday, October 15, 2016

A memory corruption bug as a result of a 'single character'

It has been quite a few months since I last saw or wrote serious algorithmic code in C++ (read: computing surfaces or doing numerical computation). So when a weird bug was reported in a program I work with, I thought it would be fun. The case was a particular feature of the program that plots a surface based on some input parameters. The bug was strange because it made the program crash on Windows, while on Linux it plotted the surface correctly.

Good then: let us fire up gdb and figure out where it was faulting. But for some reason, on the build system I was using for Windows, I couldn't get gdb to work properly. So I was left with the old way: read the code and put in a lot of printf(). Now, since this was Windows, printf() wouldn't work either! The program, however, had a logging API that would log text to a window. The problem was that there was no text in the window (or the window was not refreshed) just before the program crashed. So the next technique was to use the logging while commenting out one line at a time, with a premature return from the affected function - which is usually very tricky to do when there are nested loops, and that was exactly the case here. Finally, the real culprit was one line that read:

if (k+l < numberOfXPoints) {
 ....
 x[i][j][k+l] = ...
 ..
}

There you have it. The code ought to be:

if (k+l < numberOfZPoints) {
 ....
 x[i][j][k+l] = ...
 ..
}

Apparently this error had never been observed, and looking back at the history of the source code, I found that it had been this way ever since it was written, a couple of years ago! The error probably never produced any visible output errors and apparently went past all the test cases as well, because for none of those did the points along the z-direction exceed the points along the x-direction.

Two things to learn:
1) Never name two variables such that they differ by only one character. If you indeed need to, then ideally have the differing character towards the beginning of the word; in the case above, better names would have been xPts and zPts.
2) If an error occurs on one platform but not on another, it is most likely a memory corruption issue. Hunt down all the code dealing with arrays. (A cheap guard for this class of bug is sketched below.)
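
One cheap way to catch this class of bug early, on every platform, is checked access in debug builds. A minimal self-contained illustration (not the actual program's code):

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    const int numberOfXPoints = 10, numberOfZPoints = 5;
    std::vector<double> x(numberOfZPoints, 0.0);

    try {
        for (int kl = 0; kl < numberOfXPoints; kl++) {
            // x[kl] = 1.0;   // raw [] silently writes out of bounds, corrupting memory
            x.at(kl) = 1.0;   // .at() throws the moment kl reaches numberOfZPoints
        }
    } catch (const std::out_of_range& e) {
        std::cerr << "bounds bug caught: " << e.what() << std::endl;
    }
    return 0;
}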


Thursday, May 12, 2016

Experiments with super capacitors and solar panel

Towards the end of 2012, I started experimenting with powering devices directly using a solar panel and a pretty expensive super capacitor. For this I bought a couple of solar calculators from a nearby mall, disassembled the tiny solar panels, and tried to run a small MP3 player off them. The project kind of got shelved towards the beginning of 2013, due to the loss of my mother.

Later on, in 2014, I got some pretty inexpensive but good quality super capacitors from eBay, and then repeated the experiments. I tried two things:
1) Powering the calculator fully on solar and a super capacitor
2) Powering an MP3 player using a super capacitor and a slightly larger solar panel (again ripped out of a solar power bank)

While the fully solar calculator (supported by the super capacitor) works wonderfully well (even today), the MP3 player did work pretty well too. The only issue with the MP3 player was that the 1F super capacitor I had lasted for only about a minute once I took it away from direct sunlight. The calculator, on the other hand, requiring pretty low power, had no issue even with ambient room light.
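
That minute squares with a quick back-of-the-envelope calculation (the charge voltage and player power here are my assumptions, not measurements): a 1 F capacitor charged to 5 V stores E = C*V^2/2 = 12.5 J, and an MP3 player drawing around 0.2 W would drain that in about 60 seconds - less in practice, since the player cuts out well before the capacitor reaches 0 V.
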

I believe super capacitors combined with solar panels have interesting applications. Especially in wearable world. It is only a matter of time that these things become common place.