Wednesday, December 31, 2014

The Phoenix Project (Book Review and Lessons Learned)

If you're a professional developer, or just a developer, you should read this book, and you will probably enjoy it very much. It should also make you a better developer.

If you're an IT manager, you should REALLY read this book... because if you don't understand these concepts and implement them, you will probably end up with an inefficient project and/or team: slow, buggy, and complex processes; unmanageable and unscalable code.

This will lead you to spend the majority of your time dealing with production issues... which means less time to think about coding... which leads to lower-quality, untested code... which leads to more issues... which ultimately causes you and your team to not get enough sleep because you're always on issue calls at 3 AM... which... need I continue?

I think they call this the un-Virtuous Cycle.

For me, after reading this, it was clear that my pain point was the lack of testability in the existing code that I inherited. Without tests, you weren't sure whether even a minor change would have dire consequences... until it was released and complaints started coming in.

Because the code already existed and testability was not something that was considered at all, I couldn't directly integrate a testing framework like NUnit; I would have had to rewrite some core parts, which is just too risky...

Which leads to another consideration when coding and also why you should have unit tests:

As more code relies on a single function, it becomes harder to change... unless you have unit tests that cover all possible existing uses, in which case it should be very easy to make sure existing functionality isn't broken.
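For example, here's roughly what that kind of safety net looks like with NUnit; the AccountFormatter helper and its expected format are made up purely for illustration:

using NUnit.Framework;

// Hypothetical shared helper that many other parts of the code depend on
public static class AccountFormatter
{
    public static string FormatAccountId(int id)
    {
        // Pad to 8 digits with an "AC" prefix, e.g. 42 -> "AC00000042"
        return "AC" + id.ToString("D8");
    }
}

[TestFixture]
public class AccountFormatterTests
{
    // Each existing use case becomes a test case; if a change breaks one, you know immediately
    [TestCase(42, "AC00000042")]
    [TestCase(0, "AC00000000")]
    [TestCase(12345678, "AC12345678")]
    public void FormatAccountId_ProducesExpectedFormat(int id, string expected)
    {
        Assert.AreEqual(expected, AccountFormatter.FormatAccountId(id));
    }
}

With that in place, changing the function is a lot less scary: run the tests and you know right away whether any existing caller is affected.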

The other thing about testing, and also about automating the build process, is that it shortens release cycles. These activities usually have a relatively fixed cost; it doesn't matter whether the change is big or small. So reducing these times makes it easier to do a release, and you can do more, smaller releases instead of huge releases that contain multiple changes.

You can't fight human nature: we want to get things done in the quickest and least painful way possible. Automation lets you do that without impacting quality.

Yes this is kind of getting off-topic from the book itself but I feel, at least for me and with my own experiences, that it opens the door to all these ideas.

There is a saying that lazy programmers are the best programmers. It probably should be lazy but smart programmers. Understanding and implementing the ideas in The Phoenix Project should make you one of the latter.

Amazon: The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win

Sunday, December 28, 2014

Hello

Just realized I haven't introduced myself yet...

I am a self-taught programmer; I've been doing it since I was 7 and now I am in my mid-20s, so that's like 15+ years of experience... kind of. Programming as a 7-year-old is not the same as on-the-job experience. But then again, there are lots of things that you don't pick up from school or the job, like good coding habits and how to approach new problems or learn new skills; the latter is probably a very good skill to have.

Anyway, this blog will mostly be on tech and may be somewhat technical, but that's pretty much because most of what I'm interested in is technology/coding related. Occasionally I will write about other things, pretty much whatever comes to mind.

This is actually a really old blog, resurrected several times and then dying again... I actually started a few new blogs that were fairly narrow in scope, but in the end, I wanted to be able to write about many different things and didn't feel like managing 3 separate sites... so I am back here again (because the name is most fitting).


One purpose of those other blogs was to share some applications I wrote so they would not just end up collecting digital dust on my hard drive or on GitHub.

Some of the programs aren't too polished, but they'll get there when I have some time and interest to spare. I tend to code in bursts, whenever I think of something new to add and get around to adding it.

Feel free to drop me a comment either here or on GitHub about what you think, bugs, or anything else.

Oh and here's a brief CV:

Things I know well:

Windows, C# and WPF; Java; PHP, SQL, Web Servers, JavaScript, JSON, XML, HTML/CSS, process automation

...And the long tail, things I've dabbled in:

Linux, node.js, Backbone, Android, Scala, Clojure, WinDbg, reverse engineering/porting (ILSpy), Ant/Nant, Moq, NUnit

...and trying to figure out: Investing

Configuring subdomains on WAMP

I just spent 3+ hours trying to set up my DEV environment on a new machine... can't remember how I did it before other than installing each component one by one.

Started with that but for some reason they wouldn't play nice so after 2 hours... I uninstalled everything and just downloaded WAMP. Quick and easy... sort of.

After spending another 30 minutes trying to setup an instance of Yii, which apparently also changed a lot since my last use...

Finally, I got to setting up the subdomain because I don't like typing the super-long URL needed to access the front page. Also, the application should really be run from a root address anyway... I remember things possibly screwing up when migrating to a Production server otherwise.

http://blog.smriyaz.com/how-to-create-virtual-hosts-in-windows-wamp-server/

This site has a pretty good walk-through except:


- As of Windows 8 (I don't remember having this issue in Windows 7), the hosts file is completely locked; even elevating Notepad++ to Admin cannot save it. You need to save it to a new path and then manually copy it back into the folder.


- This code is not exactly correct and it took me a while to realize it.

The highlighted text (the mydomain.local parts in the second block below) should be mydomain.localhost:

NameVirtualHost 127.0.0.1  

<VirtualHost 127.0.0.1>
DocumentRoot "c:/wamp/www"
ServerName localhost
</VirtualHost>

<VirtualHost 127.0.0.1>
DocumentRoot "c:/wamp/www/mydomain.local"
ServerName mydomain.local
</VirtualHost>

Saturday, December 27, 2014

Money: Master The Game

Just finished reading this last week, so writing this review while it's still fresh... at least my immediate opinions.

The book overall is pretty detailed and informative. It gets you thinking and asking questions and motivates you to take action. It is also very interesting as it is based on interviews with some very, very famous people. Just for that, you may want to read it from cover to cover.

But at the same time, the real meat of the book, the really useful information without the background story and details, probably makes up less than 10% of the book.

Tony Robbins, as usual, takes us on a journey, and in book format... it is fairly long. It probably took me 20 hours of reading in total, over 3 weeks. He covers not only the financial information but also develops the mindset for how to approach investing and creating wealth (holistically, not just money-wise).

This is probably a book you want to finish as soon as possible:

1. Get your money set up and working ASAP
2. Keep your motivation alive, because trust me it dies over time, and if it dies, you will never finish reading it.

Also, if you have a finance background or have been reading other investing books like The Intelligent Investor (that's me...), even less of it will be new to you.

Personally, other than the actual asset allocations, I found some specific points to be a good review (why diversify, correlations, the buy-and-hold strategy) and the interviews somewhat interesting, and afterwards, I seem to be more motivated to take action than before... but then again, maybe that's due to the significant time investment... it would be a waste not to.

One other thing is that he bolds a lot of key concepts so it's possible you can skim the book (easier to do digitally) and pick out the main points and sections that apply to you (I did that for some chapters mostly either because I already knew about the topic or it didn't seem relevant to me personally).

At times it does seem like he is trying to sell you the services that he introduces. They themselves look somewhere between OK and pretty good, but after he reintroduces them over and over again throughout the book... I kind of feel like I am being marketed to and his goal is to get me to buy something...

So in summary, it's overall a good book, but you need some serious free time and commitment to finish it all; at the same time, that makes it feel like you've made an investment, so you're more likely to spend effort taking action. You also need to consciously overlook the marketing tone and evaluate the services yourself.

Amazon

GoodReads

Why You Should Learn Different Programming Languages

Up until I graduated college, I mostly used Microsoft languages like C# and VB. Other than the .NET switch from VB6, I pretty much wrote code the Microsoft way and whatever way that I knew how... which in hindsight looks like a mess.

When you start working in a team and need to understand other people's messy code and/or develop large applications where you cannot remember everything you did several months back... but need to, clean, simple, robust, unit-testable, and sufficiently documented code will save you a lot of time and pain...

I can't actually say you will learn this... I know a lot of people that don't and they are always busy putting out fires in a production environment... which to me sounds boring and stressful. (I have a whole rant on this which I will leave for another day...)

So anyway... the thing about learning or dabbling in many languages is that it exposes you to many different paradigms and conventions, which make up new concepts and different approaches to programming (event-driven, MVC, functional, ...). When you go back to your core language, you tend to pick out the ones you really like and start actively looking for ways to implement them in that language, because concepts tend to be transferable. Oftentimes you will find that there is already some existing or similar framework, but without stepping out of your comfort zone, you would've never come across it.

In addition, you will be more comfortable with learning other languages and tools; you get used to rolling up your sleeves, figuring things out, and thinking about how to make things better, faster... automated.

And it can all culminate in many Aha! moments. I have had many problems where I could not come up with a permanent fix or solution immediately, but after a few weeks or months, after learning some new things and revisiting the problem, I go... What if we do this...

So here are some things I've learnt from other languages and brought back to C#.

Anonymous Functions

Language: JavaScript, Java

In Java you can implement interfaces as an anonymous class, and in JavaScript you can pass functions as parameters.

At the time, I kind of knew about events, delegates, and LINQ in C#, but never really used them. But more and more, I started seeing instances where it would be great to pass functions.

Without the exposure to the concept, I probably would have never touched these things in C#.
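For instance, here's a small made-up sketch of the idea in C#: the filtering logic is passed in as a lambda via a Func parameter.

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    // Made-up helper: the "what to keep" logic is a parameter, not hard-coded
    static List<int> Keep(IEnumerable<int> source, Func<int, bool> predicate)
    {
        return source.Where(predicate).ToList();
    }

    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

        // The anonymous function (lambda) is passed just like any other argument
        var evens = Keep(numbers, n => n % 2 == 0);

        Console.WriteLine(string.Join(", ", evens)); // 2, 4, 6
    }
}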

Interfaces, Dependency Injection, and Design Patterns

Language: JavaScript, Java

The first time I used this was on the job, on a small project, via Spring. Then I started using JavaScript frameworks like jQuery and node.js, which have very modular components and rely on configurations to instantiate concrete instances of interfaces.

To be honest, up until then, I didn't really understand interfaces and inheritance; I had never really used them.

But this kind of "opened the door" and one thing led to another:

Why do I want to use dependency injection (IOC)?
  • Easier to swap out
  • Flexible
  • Forces you to think about and remove hard coded dependencies
  • Components can be reused in different projects... less coding, less testing

How do I make components? 
Use interfaces and abstract classes; understand inheritance

How should I build these modules?
  • Use design patterns when possible
  • Keep them small and as independent as possible
  • Separation-of-concerns, software architecture
What are design patterns, software architecture?
...
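To make the chain above a bit more concrete, here's a minimal constructor-injection sketch in C#; the interface and classes are invented just for illustration:

using System;

// The consumer only depends on an abstraction...
public interface IMessageSender
{
    void Send(string message);
}

// ...so concrete implementations can be swapped without touching it
public class EmailSender : IMessageSender
{
    public void Send(string message)
    {
        Console.WriteLine("Email: " + message);
    }
}

public class SmsSender : IMessageSender
{
    public void Send(string message)
    {
        Console.WriteLine("SMS: " + message);
    }
}

public class OrderService
{
    private readonly IMessageSender sender;

    // The dependency is injected through the constructor (no hard-coded "new EmailSender()" inside)
    public OrderService(IMessageSender sender)
    {
        this.sender = sender;
    }

    public void PlaceOrder(string item)
    {
        sender.Send("Order placed: " + item);
    }
}

class Program
{
    static void Main()
    {
        // Swapping implementations is a one-line change (or, with an IoC container, just configuration)
        OrderService service = new OrderService(new EmailSender());
        service.PlaceOrder("The Phoenix Project");
    }
}

Unit testing also gets easier, since you can pass in a fake IMessageSender instead of the real thing.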

Frameworks

Languages: PHP, JavaScript, Java, C#

The first time I really thought about frameworks was when I started working on a web service running on ServiceStack. I had to port it to Java using Jersey, and had to learn all about how it worked and how to write code for it.

Then, I did some personal web development in PHP which led me to Yii, an MVC framework... now I learned MVC.

Next, node.js was new and really popular a few years ago, and I tried to pick it up (though by that time, my interest in serious web development was fading...). But anyway, I got a bit of that, and it led to learning event-driven programming (which would lead to a whole bunch of other things...).

But anyway, you start learning that frameworks are useful for reducing redundant boiler-plate code because they already do it for you; you don't need to start from scratch. 

So now when I start a new project, I usually think about what needs to be accomplished and if there are any frameworks I should use to do it faster or better.

Class Libraries

You learn after awhile that a lot of code is redundant and not to "reinvent the wheel" whenever possible. Energy and time are limited and should be spent on more interesting things (which is also something that a lot of people don't understand... see rant above).

And with class libraries, you also learn to write more generic and smaller functions which allows you to reuse the same code over-and-over again in your application. This has many other benefits like testing, not having to change the same thing 50 times, etc. The caveat is you better have very good, robust unit tests to ensure that changes don't impact existing functionality... hm... sounds like automated unit tests... (again, one thing leads to another...)
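For example, a small generic helper along these lines (made up for illustration) can live in a shared library, get unit tested once, and then be reused everywhere instead of copy-pasting the same retry loop:

using System;

public static class RetryUtils
{
    // Runs an action up to maxAttempts times before giving up and rethrowing.
    // Written (and tested) once in the class library, reused by every project.
    public static void Retry(Action action, int maxAttempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                action();
                return;
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                    throw;
            }
        }
    }
}

Then a caller just does something like RetryUtils.Retry(() => SaveFile(), 3), with SaveFile being whatever flaky operation you have, instead of rewriting the loop each time.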

In fact, now I've been building a few of my own for reducing redundant code in my projects. It does have some problems though, as all my projects link to the working copy, but at the same time I don't want to have a static DLL reference for each... maybe I should, but then it makes it harder to change things on-the-fly...


MVVM

Well, this is a C# thing to begin with, but it's a culmination of several of these concepts, plus the fact that Visual Studio 2013 now has a very good (read: no bugs, extremely good code-hinting) WPF editor.

The other contributors were:
  • MVC, frameworks in general
  • Delegates and events, which led to learning Func, Predicate, and Action, which led to INotifyPropertyChanged (the foundation of MVVM and the whole WPF concept) and ICommand (see the sketch below)
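Here's a bare-bones ViewModel sketch of the INotifyPropertyChanged part (the class and property names are made up; a real ViewModel would also expose ICommand properties):

using System.ComponentModel;

// Minimal ViewModel: the view binds to Name and gets notified whenever it changes
public class PersonViewModel : INotifyPropertyChanged
{
    private string name;

    public string Name
    {
        get { return name; }
        set
        {
            if (name == value)
                return;

            name = value;
            OnPropertyChanged("Name");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}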

Collection Initialization, Structs

Languages: JSON, JavaScript

A lot of times in JavaScript, configurations are passed in as a JSON object. Up until then, I tended to initialize static collections, which usually contain configurations or static values, with a bunch of Add calls.

Now I tend to initialize them in-line and in my opinion it's more intuitive and readable.
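Something like this (the settings are made up): the in-line version reads much closer to how a JSON object would look.

using System.Collections.Generic;

public static class Config
{
    // Before: Settings.Add("Host", "localhost"); Settings.Add("Port", "8080"); ...
    // Now: initialized in-line with a collection initializer
    public static readonly Dictionary<string, string> Settings = new Dictionary<string, string>
    {
        { "Host", "localhost" },
        { "Port", "8080" },
        { "UseSsl", "false" }
    };

    public static readonly List<string> AllowedExtensions = new List<string> { ".txt", ".csv", ".log" };
}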

You can do something similar with structs, although it's a bit confusing: the whole value vs. reference thing and immutability... I don't think I have a complete understanding of it yet, but I use them mostly to pass a set of configuration values or data that won't be changed.
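Roughly how I use them, with made-up names: a small struct holding a fixed set of values, passed around by value so the callee only ever sees a copy.

using System;

// Small value type that just carries a fixed set of configuration values
public struct ServerConfig
{
    public readonly string Host;
    public readonly int Port;

    public ServerConfig(string host, int port)
    {
        Host = host;
        Port = port;
    }
}

class Demo
{
    static void Connect(ServerConfig config)
    {
        // config is a copy (value semantics), so the caller's original can't be modified here
        Console.WriteLine("Connecting to " + config.Host + ":" + config.Port);
    }

    static void Main()
    {
        Connect(new ServerConfig("localhost", 8080));
    }
}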

Lastly

Programming Methodology

You get exposure to different ways of thinking about programming and you know they exist so you can evaluate when to use one over the other.
  • Functional programming (Scala, Clojure)
  • Event-driven, asynchronous  programming (node.js)
  • Imperative programming

Spent more than an hour writing this so... got kind of tired near the end.

Friday, December 26, 2014

How Amazon Gave The Tech Community An Early Christmas Gift

I didn't want to post this until the sale was over, which it now seems to be... guess it was only good for Christmas or enough people caught on.

Basically, the $200 unlocked Amazon Fire phone was a very nice gift to us techies and anyone that has dabbled in tinkering with their phones.

After a couple of fairly simple software changes, you could've gotten a phone that performs just as well as, if not better than, the $600 crap Samsung and other (less crappy) device makers build.

For a regular user, it could even be on par with the OnePlus, because let's face it, how many want such a large screen or need SuperUser permissions?

The key is to enable the installation of third-party apps, which is found somewhere in Settings. Something along the lines of "enable applications from unknown sources".

Then, install Google Services and the Play Store.

Download the APKs from the link below and install them in the following order, restarting the device after each install:

http://www.epubor.com/how-to-install-google-play-on-kindle-fire.html

  • Google Service Framework
  • Google Login Service
  • Play Service
  • Google Play Store
If all goes well, the first time you open the Store, it will ask you to log in to... Google!

After that you can install all your Google apps and Chromecast even works too!

Also, now you can download the launcher of your choice. For a more authentic Android experience, I recommend Nova or Google Now.

Voila! You now have a $300-600 phone for $200. And if Prime has any value to you, even less!

Warning: It is possible that this will not work with Fire OS higher than 3.6.8, and Amazon could cripple the Google apps with an update, but I think that would put it into legal hot water.

And now since developers have the phone at such a cheap price, more attention is going into rooting and modding it so in the future, the Fire Phone may even run a non-Amazon operating system.

Tuesday, December 9, 2014

Getting Started with WinDbg

Recently, one of my applications started freezing up on our Production servers. It was impossible to figure out what was going on from the logs, and reviewing the code seemed to show that such behavior should be impossible. So I took a crash dump and used WinDbg to analyze it. It turned out to be caused by one of the libraries our application used, which had recently been upgraded. Simple, right?

...Not quite, I skipped the part where it took me more than 2 days to figure out how to properly take the dump and get it running inside WinDbg.

So to save other people the trouble, here's what you need to know and how to get started.

First, you'll need to get the program which comes in two versions, 32 and 64-bit. The one you need depends on the version of the program you are debugging, not the operating system it is on.

On 64-bit Windows, you can tell by looking at the process in Task Manager. A 32-bit application will have *32 next to the process name in the Details tab.

The WinDbg installers though are part of the Windows SDK package which is downloadable on the Microsoft site.

However, the web installer wants to install the entire SDK but really you just need WinDbg. So...


You can download the ISO here and after mounting it, there should be a folder with the WinDbg installers, thus avoiding the need to install the SDK.

This is for the Windows 7 SDK but for other versions there should be something similar; just search around for it.

OK, so let's say you need to debug a 32-bit process running on 64-bit Windows 7. However, you will be doing the analysis on another machine (running a different version of .NET).

So first, you need to take the process dump. To do this, I recommend you use SysInternal's Process Explorer (procexp.exe). If you Google it, you will easily find it (will put a link up eventually)

This avoids the issue of having to run the 32-bit version of Task Manager (the 64-bit one is too stupid to figure out the correct format for the dump file of a 32-bit process). ProcExp avoids the whole issue because it's smart :) Just right-click on the process and select Dump --> Full Dump

I've never tried mini dumps, but more is probably better.

Now, once you have this file, you need to copy it somehow to your debugging machine. You also need to copy 2 files from the original machine:

  • SOS.dll
  • mscordacwks.dll
Again, depending on the bit-ness of the application and the .NET version, they are in either:


  • C:\Windows\Microsoft.NET\Framework64\{version}
  • C:\Windows\Microsoft.NET\Framework\{version}
Copy the correct files to your machine.

Now start the version of WinDbg based on the application's bit-ness, in our case 32-bit.

Then, drag the dump file into the program. This will load it.

Now configure the Symbols Path using File->Symbols Path, and enter:

SRV*{local path}*http://msdl.microsoft.com/download/symbols

The {local path} should be a folder; it does not have to exist.

Now you need to load the .NET environment from the PC the process was running on, specifically the two DLL files.

Execute .load {full path to file} to load the two DLLs.

Then type !analyze which will begin analyzing the dump.

If you get an error saying something like the CLR version does not match, most likely it is because it automatically loaded this machine's environment which overrides the custom set ones.

If you run .chain it should show the first entry as something related to .NET. Also make sure the two DLLs you loaded are under it.

Type .unload to remove the entry and analyze again. This should not have any issues now.

And now you're all set up!

Here's a few cheat sheets to get you started:

http://windbg.info/doc/1-common-cmds.html

http://theartofdev.com/windbg-cheat-sheet/

Also here's the link to the SOSEX extension which adds some pretty useful features, in my opinion.

http://www.stevestechspot.com/

You can load it in a similar manner as the other DLLs.

Pictures to come at a later date... or never.

Keeping It Simple

Complexity is a great demotivator... For example:

Complexity: You have a 40-minute manual build/deployment process where a single error will set you back to the beginning? You're not going to do builds too often, and when you do, they will contain a large number of changes; if any issues arise, it will sometimes be impossible to figure out the root cause.

Solution? Automate the deployment process as much as possible. (NAnt, Ant, write some utility programs)


Complexity: You keep your shopping list, to-do list, or a large amount of important information in your head. I guarantee you will forget things and drive other people mad.

Solution? Write things down and organize them, because if you can't find it, it's the same as not writing it down... which leads to...


Complexity: You keep things everywhere in an unorganized manner such that it takes much longer than necessary to find things when you need them.

Solution? Find or develop some organization system with fixed rules which significantly narrows the area you need to look in.


Complexity: You have way too many blogs, such that you spend so much time trying to decide which one to post in that you don't post anything at all (Raises hand)

Solution? Well so far I decided to just pick one and move all the other content here. I'm still figuring it out...

Update... Yea so on this one, apparently I couldn't make up my mind and for some other reasons, now I moved again... (I promise this will be the last time... I hope)

Sunday, November 9, 2014

How to Publish Existing Projects to GitHub

I spent an hour or more trying to publish my existing projects to GitHub the first time I did it, so this is for anyone who wants to avoid the trouble and is completely new to GitHub and Git.

First you will need the git utilities which can be found here: http://git-scm.com/

I assume you are using Windows and installed it to the PATH as well; otherwise you will need to use the Git Shell.

You also need the GitHub app, which you should know where to get (hint: github.com).

This will be the easiest way to do it.

You can just use the git shell, but you'll need to manually register the machine on GitHub and use git's add and commit commands. Also, the GitHub app has some nice additional features like code comparison and a GUI (I know GUIs sometimes get a bad rap from power users, but sometimes they actually make things easier than the shell).

The steps below assume you:
  • have the above set up, obviously
  • have an existing C# project, but any project would be similar and you can probably figure it out
  • are creating a new GitHub repo for the project
First open a command prompt (or git shell) and go to your project folder. This folder has the *.sln file. If your solution only has 1 project though, you can use the folder with the *.csproj as the root.

I sometimes have projects with multiple child components each with their own .csproj, but the solution puts it all together and they really should be treated as 1 project. So for those, I would use the first choice.

In the folder, type and execute git init 

This creates a new local repository in the folder. Note the repo will be named the same as the folder it is created in.



In Explorer, go to the folder containing the project folder (1 level higher than the folder in which you created the repo)


Drag the folder into the GitHub app, which will then list the files in the folder and subfolders.


You will see the files that will be added; most likely you don't want the build files like bin/* and obj/*. You exclude them by creating a .gitignore file, but you can use the GitHub app to do it for you:

With the new repo selected, on the top right, you should see a widget icon which, when clicked, will display a menu. Click on the "Repository Settings..." option.


You should now see 2 columns, and large icons that say "Create ..."

Click on them and they will create the default files. I've never touched the Attributes but you may need to edit "Ignored files" if you have some custom files you want ignored.



You can add individual files with their exact name, or folders which need to be in the format: <folder name>/

You can also use * as a wildcard, so "myFile*" would exclude any file beginning with "myFile".

"myFolder*/" would match any folder beginning with "myFolder", and all its subfolders as well.

Now, once this is done, exit the settings and the Files To Commit list should update appropriately.

When you have reviewed all the files, commit them to master by first filling out at least the Summary field (when that is filled out, the Commit button will be enabled).

You have now added the files to your local repository. You need to click "Publish Repository" for the project to be created on GitHub.


Then, the button will turn into "Sync" and you need to click this for your files to be added to GitHub.

Again, "Sync" actually copies the files (not Commit, not Publish Repository).

For subsequent commits, you follow a similar process, "Commit to master" will commit the changes to the local repo, "Sync" will send them to GitHub.


Bulk Rename Utility 1.0

Recently I wanted to rename a bunch of similar video files and thought it would be nice to do a bit of programming (pretty much all weekend... but it was fun) to get it done. The result is Bulk Rename Utility.

The program allows you to rename a bunch of files at once based on Filters (or rules).

Currently there are two types of filters but for the most part you only need RegexFilter anyway. TrimFilter I guess is more just to prove that multiple filters work :)

The RegexFilter searches the name for the parts that match the regular expression in SearchFor and replaces them with the text in ReplaceWith. To delete the matches entirely, just leave ReplaceWith empty.

Also, if you have files loaded, it will use the first file's name as the default Preview text.

The TrimFilter trims a single character from the beginning and end of the name, not very exciting I know.

Note that filters are run in order, so if you had:

Original String: Hello World
Filter 1: World => Apple
Filter 2: Apple => Orange

The result would be "Hello Orange". And if you flipped the order, it would be "Hello Apple"
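Under the hood, a filter chain like that is essentially a series of regex replaces; here's a simplified sketch of the idea in C# (not the actual program code):

using System;
using System.Text.RegularExpressions;

class FilterDemo
{
    static void Main()
    {
        string name = "Hello World";

        // Filter 1: World => Apple
        name = Regex.Replace(name, "World", "Apple");

        // Filter 2: Apple => Orange
        name = Regex.Replace(name, "Apple", "Orange");

        Console.WriteLine(name); // Hello Orange
    }
}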

The program also allows you to save the filters so that they can be used again on other files.
Finally, you can remove multiple files from the list at once by using CTRL to select multiple items and clicking "Remove Selected".

Download: https://drive.google.com/folderview?id=0BwHjtARwf-GFV0ZidHplMF85TFE&usp=sharing

GitHub: https://github.com/allanx2000/BulkRename

Also, all applications are built for .NET 4.0 Client Profile unless otherwise stated. I've only tested it on Windows 8 x64... because I'm too lazy to install an x86 VM, but it should work since it is compiled targeting Any CPU.

Saturday, October 18, 2014

Using var and temporary variables

Very often, I use short variable names such as 'i', 'ctr', 'idx'. As you can tell from the name, generally these are used for iterating through a collection.

For me, I will use one of these if it's something quick and easy such as a fairly simple sort or filtering. Generally these are short (there are maybe one or two lines of code) and there aren't any other variables used.

Similarly for 'var', a C#-specific feature: I tend to use it when working with LINQ when I don't really need to keep the intermediate results. If I need the results, I usually convert them to a List, and that has a fairly descriptive name as well.
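Something like this made-up example: the intermediate query gets 'var', and the result I actually keep gets a proper type and a descriptive name.

using System;
using System.Collections.Generic;
using System.Linq;

class Example
{
    static void Main()
    {
        var scores = new List<int> { 88, 42, 95, 67, 73 };

        // Intermediate result: 'var' is fine, the type is obvious from the query
        var passing = scores.Where(s => s >= 70).OrderByDescending(s => s);

        // Kept result: converted to a List with a descriptive name
        List<int> passingScoresSorted = passing.ToList();

        Console.WriteLine(string.Join(", ", passingScoresSorted)); // 95, 88, 73
    }
}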

I also use it in GUI programming with Button events. Again, it's only a few lines of code and I only need the variable a few times at most. Most of the time, it's just created so I can call Show(). You could just create the object without assigning it to a variable, but assigning it isn't at all difficult and could come in handy later if debugging is needed.


I would not recommend these for any code that spans more than maybe half a code page, or for code where you can't understand what it's doing within a few seconds, like algorithms.

Speaking of algorithms, some people like using names like 'a', 'b', 'c', but most algorithms are long and/or hard to understand. Usually for things like nested loops, or code involving lots of other variables, I use better names.

For example, iterating through a table, I will use 'row', or 'col' if I am doing a lot with the data. If it's just a simple copy, probably 'r', 'c' is good enough.

But going back to algorithms particularly, I try to use names that are more descriptive so that people, myself included when I'm reviewing the code, don't need to spend 10 minutes figuring out what each one means.

I have spent a lot of time walking through code to figure out what it does because of what other developers have done, and furthermore they don't even leave any comments. That's another thing: if something is complicated, or goes against usual conventions for some reason, leave a good comment that will let others, or yourself, immediately understand what's going on.

Why use readonly (versus const)

I was playing around with Resharper today and ran a code analysis on one of my projects, which came back with 430 code issues. One of the suggestions said there were fields I should mark as 'readonly'. And this reminded me of a question I've sometimes wondered about.

Today I finally Googled it, and it resulted in an "Oh yeah..." moment. The top answer on StackOverflow being:
The readonly keyword is used to declare a member variable a constant, but allows the value to be calculated at runtime. This differs from a constant declared with the const modifier, which must have its value set at compile time. 
I've run into this problem many times before, where I try to assign a DateTime (for something like Today) or a List (of required names) into a constant variable. Up until now, my solution was just to remove the 'const' modifier.

Also, readonly allows the class to assign the field in the constructor, so if, for example, you need one instance that should never be reassigned after initialization but also needs some setup work, readonly would be a good fit... if the object is used within the class only.
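A quick illustration of both points (the field names are made up): const only works for compile-time values, while readonly handles runtime values and constructor assignment.

using System;
using System.Collections.Generic;

public class ReportGenerator
{
    // const must be a compile-time constant
    public const string Title = "Daily Report";

    // These would not compile as const, but readonly is fine because they're set at runtime
    private static readonly DateTime RunDate = DateTime.Today;
    private static readonly List<string> RequiredNames = new List<string> { "Id", "Name", "Amount" };

    // readonly can also be assigned in the constructor and never reassigned afterwards
    private readonly string outputFolder;

    public ReportGenerator(string outputFolder)
    {
        this.outputFolder = outputFolder;
    }
}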

If the object is also public though, you may want to consider using a getter instead, like in the singleton pattern.

I usually use readonly on ViewModels: when a window is initialized, these are created but should never be replaced afterwards. It's probably a no-brainer, but just for some added insurance, you can use the readonly modifier on them.

And Resharper made my Visual Studio really slow... so bye bye Resharper!

Sunday, April 20, 2014

Finding an Excel (XLSX) Generation/Library for .NET (C#)

I've been trying to generate an XLSX (Excel) document but so far haven't been getting anywhere.

  • Excel COM/Interop - I'm not 100% sure but I think in order for anyone to be able to use this, they would need a version of Excel installed... so no, I don't want any MS Office dependencies...
  • EPPlus - Looks buggy; after partially generating the first row, all subsequent changes never get put into the document... Actually, I screwed up and omitted 1 line of code in an iterator function... 0rz
  • MS OpenXML SDK - I don't know if it's just me, but the documentation is horrible and no working sample programs are provided. I spent an hour trying to use it and integrate it into my project, but after it finished generating the files, Excel says they're corrupt...
  • SimplExcel - This one works as well although you need to get it from NuGet which may be a bit of a hassle. It is, however, the easiest to use out of all the ones I tried. The documentation is also very clear.

Saturday, April 19, 2014

Android NAND dumps via adb

An old post... may be useful, may be not. The issue I've had since then is how to restore it... although wiping the phone clean made it somewhat faster, and I just restored the files and programs using Titanium.


http://android.stackexchange.com/questions/28296/full-backup-of-non-rooted-devices

Took me a while to dig this back up. I don't usually take NAND dumps, but now I need to update CyanogenMod on my phone again, and I think I may need a full flash (wiping the device and everything). Seems like after it upgraded to CM11, the Camera has been really buggy (crashing very frequently...).

Anyway, this provides the Windows (and Linux) commands to run to get the NAND dump using ADB.