Thoughts on the Angular Material Datepicker

While researching how to make my dream of developing a countdown clock Angular application for the final project of Software Construction, Design, and Architecture a reality, I came across an interesting writeup on the Angular Material Datepicker by one of the Angular Material developers, Miles Malerba. With plans of creating a user-inputted countdown timer, a datepicker component sounded like a welcome alternative to making one from scratch. I decided to look further into the Material Datepicker to see if it would be something that could prove useful.

The Material Datepicker includes support for the required attribute, which is used for data validation when a form is submitted. This seems like a worthwhile feature, as it would make little sense to allow the user to create a countdown timer without inputting a date to count down to. The datepicker also has an additional mdDatepickerFilter attribute, which allows for “finer grained control of what’s considered a valid date.” This also seems like an important feature for a countdown timer input, as I would want to prevent users from selecting a date in the past, which would be invalid to count down to.
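To get a sense of how this might look in practice, here is a minimal sketch of the kind of filter I have in mind, written against the current mat-prefixed API rather than the older md prefix used in the writeup. The component, selector, and function names are my own placeholders, and the module imports needed to wire this up are omitted:

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'countdown-date-input',
  template: `
    <mat-form-field>
      <input matInput [matDatepicker]="picker"
             [matDatepickerFilter]="noPastDates"
             placeholder="Count down to" required>
      <mat-datepicker #picker></mat-datepicker>
    </mat-form-field>
  `,
})
export class CountdownDateInputComponent {
  // Only allow today or future dates, since counting down to a
  // past date would be meaningless.
  noPastDates = (date: Date | null): boolean => {
    if (!date) {
      return false;
    }
    const today = new Date();
    today.setHours(0, 0, 0, 0); // compare against the start of today
    return date >= today;
  };
}
```

The required attribute handles the empty-input case, while the filter function handles the past-date case, which seems like exactly the split of responsibilities described in the writeup.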

While I had not previously thought about supporting mobile users with my countdown timer application, the Material Datepicker’s mention of a specific “touch UI mode” made me reconsider. I think that mobile users would be an important audience to appeal to, and a countdown timer tailored to them would be worth pursuing. Perhaps mobile users would have more use for a countdown timer on their phones than on the computer. I will have to look into the possibility of supporting mobile users.

While it does not apply to my project, I thought that the Material Datepicker’s DateAdapter and support for any locale was an interesting addition. The DateAdapter is an abstract class that allows developers to specify the formatting of dates, which allows a representation like 1/2/2017 to mean January 2nd, 2017 in America and February 1st, 2017 just about anywhere else. Since my project will only need to support the American date representation, the included NativeDateAdapter class should fill my needs. This class uses the native JavaScript Date object to represent dates, following the American representation mentioned earlier.

In conclusion, I think that the Angular Material Datepicker will certainly help in the development of my Angular Countdown Timer Single Page Application (SPA). Having a datepicker component that is already written will allow me to focus on the more important aspects of design, such as allowing users to save their countdown timers by implementing database calls. While there is certainly still much work to be done on my Angular SPA, reading about the Angular Material Datepicker has me excited to get started developing.

Practical Uses for Design Patterns

Sometimes completing assignments can become monotonous. I find this mainly happens when I cannot seem to think of a practical use for what I am currently working on. While I understand that many example implementations of design patterns are intentionally left abstract so as to highlight the importance of the pattern rather than the complexities of the underlying system, this bores me. My approach to programming is often utilitarian in the sense that I want to know how what I am currently working on is going to make someone’s life easier.

(Image source: https://drquicklook.com/products/usb-to-sd-card-adapter)

This week I listened to Episode 30 of the Coding Blocks podcast, from July 26, 2015. In the episode, Allen, Joe, and Michael discuss the Adapter, Facade, and Memento design patterns. The first pattern discussed was the Adapter pattern. I paid particularly close attention to this one, as I will be researching and compiling an informative piece on the Adapter pattern in the coming weeks. The podcast provided a link to an excellent tutorial on TutorialsPoint, which I will most definitely be using as a reference for my project research. The real-life example used to describe the Adapter design pattern was the SD-card adapter, which takes an SD card and, through a common interface, presents it over USB so that the computer can recognize and use it. The Adapter design pattern in software provides a very similar function: it takes two otherwise incompatible interfaces and acts as a bridge between them so that they may seamlessly interact with one another.
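To cement the idea for myself, here is a minimal sketch of the pattern in TypeScript, loosely based on the SD-card analogy. The interface and class names are my own, not taken from the podcast or the TutorialsPoint tutorial:

```typescript
// The interface the client (the computer) already knows how to use.
interface UsbDrive {
  readFile(path: string): string;
}

// An existing, incompatible interface (the SD card).
class SdCard {
  readSector(sector: number): string {
    return `data from sector ${sector}`;
  }
}

// The adapter wraps the SD card and exposes it through the USB interface.
class SdCardUsbAdapter implements UsbDrive {
  constructor(private card: SdCard) {}

  readFile(path: string): string {
    // Translate the file-oriented request into the sector-oriented
    // call that the SD card actually understands.
    const sector = path.length % 64; // placeholder mapping for the sketch
    return this.card.readSector(sector);
  }
}

// Client code only ever sees the UsbDrive interface.
const drive: UsbDrive = new SdCardUsbAdapter(new SdCard());
console.log(drive.readFile('/photos/eclipse.jpg'));
```

The client never needs to know that an SD card is doing the work underneath, which is exactly the bridging role described above.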

In the discussion of the Facade design pattern, I saw some parallels to the Adapter pattern, but there were certainly also observable differences. The Facade pattern aims to hide the complexities of some underlying system by providing a simplified interface through which the user can access and use the resources the system provides. The example the three discussed that I found very interesting was transactional payment processing systems such as PayPal. The goal of applying Facade in this case would be to hide the multiple repeated API calls that must be made each time a user would like to perform a task, such as setting up a secure connection, passing a token, and storing a token, before actually accomplishing the desired task.
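A rough sketch of what that might look like follows. The subsystem classes and method names here are hypothetical stand-ins of my own, not PayPal’s actual API:

```typescript
// Hypothetical subsystem pieces that must be used in the right order.
class ConnectionService {
  openSecureConnection(): void { console.log('secure connection opened'); }
}
class TokenService {
  requestToken(): string { return 'token-123'; }
  storeToken(token: string): void { console.log(`stored ${token}`); }
}
class ChargeService {
  charge(token: string, amountCents: number): void {
    console.log(`charged ${amountCents} cents using ${token}`);
  }
}

// The facade hides the repeated setup calls behind one simple method.
class PaymentFacade {
  private connection = new ConnectionService();
  private tokens = new TokenService();
  private charges = new ChargeService();

  pay(amountCents: number): void {
    this.connection.openSecureConnection();
    const token = this.tokens.requestToken();
    this.tokens.storeToken(token);
    this.charges.charge(token, amountCents);
  }
}

// Callers accomplish the desired task with a single call.
new PaymentFacade().pay(1999);
```

Unlike the Adapter, nothing here is being translated between incompatible interfaces; the facade simply collapses a multi-step ritual into one call.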

The final pattern discussed is the Memento design pattern. While the three seem to have mixed feelings about its usefulness, I thought that the discussions of Megaman and of System Restore as implementations of this pattern were extremely useful and interesting examples. The pattern, in a basic sense, aims to save a complete copy of an object’s state at a given time. This state object is accessed and maintained by two classes: a caretaker and an originator.
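To make the caretaker/originator relationship concrete, here is a bare-bones sketch in the spirit of the Megaman and System Restore examples; the class names are my own:

```typescript
// The memento: an immutable snapshot of the originator's state.
class GameMemento {
  constructor(readonly level: number, readonly health: number) {}
}

// The originator creates mementos from its state and can restore from them.
class Game {
  constructor(private level = 1, private health = 100) {}

  save(): GameMemento {
    return new GameMemento(this.level, this.health);
  }

  restore(memento: GameMemento): void {
    this.level = memento.level;
    this.health = memento.health;
  }

  play(): void {
    this.level += 1;
    this.health -= 25;
  }

  toString(): string {
    return `level ${this.level}, health ${this.health}`;
  }
}

// The caretaker holds mementos but never looks inside them.
class SaveSlots {
  private history: GameMemento[] = [];
  push(m: GameMemento): void { this.history.push(m); }
  pop(): GameMemento | undefined { return this.history.pop(); }
}

const game = new Game();
const slots = new SaveSlots();
slots.push(game.save());    // checkpoint before a risky section
game.play();
game.restore(slots.pop()!); // roll back, like a system restore point
console.log(`${game}`);     // back to "level 1, health 100"
```

The caretaker knows when to save and when to roll back, but only the originator ever reads or writes the contents of the snapshot.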

What I like most about the examples and explanations that the three give for their respective design patterns is how practical they seem. While the ultimate goal of applying a design pattern to a particular problem is to simplify the overall implementation, it is certainly not always a simple task to apply a pattern. Understanding some of the motivation behind why applying the design patterns makes an implementation cleaner and more effective satisfies my utilitarian inclinations. I am looking forward to exploring the complexities of the Adapter pattern more thoroughly in the near future.

Predictive Applications and the ‘Datafication’ of Everything

We live in a world where we are constantly being bombarded with information. Not only do we consume insane amounts of data, we also provide other people and businesses with information about ourselves. Whether we are signing up for online mailing lists, ordering magazine subscriptions, or even making dinner reservations, information about our habits and preferences is constantly being left behind, a concept that Charlie Berger refers to as data exhaust in a podcast from October 10, 2017 on Software Engineering Radio. The larger concept he is describing is what is known as ‘datafication’, a buzzword in the data science and big data spheres that refers to collecting and storing information about social actions so that it can be used for predictive analyses and targeted marketing.

Specific to the computer science discipline, datafication has implications for the development of predictive applications. In the podcast episode, Berger presents the simple yet extremely effective example of an ATM as lacking in the predictive sense. Berger wonders why, each time he uses the ATM, he is asked which language he would like to use, and why such preferences are not somehow tracked and stored, making for a more seamless and personalized ATM experience. Berger even suggests that the ATM track more than language preferences, offering withdrawal suggestions based on previous transaction data from a similar day of the week or time of day.

While it may not be terribly inconvenient to have to choose a language each time you use the ATM, the concept of predictive applications and the advantages of creating and using them become much more apparent when considering larger-scale operations. Retailers can use predictive applications to make important decisions about things like advertising and merchandising. Berger mentions the well-known “parable of the beer and diapers,” where an interesting and entirely unexpected correlation was found between purchases of diapers and beer. While some versions of the tale include the retailer moving the two correlated items next to one another in order to drive increases in sales, this may or may not be factual. Regardless, generating useful information by querying data in this way is a perfect example of the power that predictive applications have.

Berger repeatedly stresses the importance of moving the algorithm to the data, not vice versa. By moving the algorithm to the data, we avoid the dangers that come with moving data outside of its secure, encrypted environment. Developing applications that perform queries and compile information that is usable and useful not only to data scientists but to normal people as well is a perfect example of how machine learning and predictive applications can make everyone’s jobs easier.

As a student, I gave one of Berger’s closing remarks careful consideration. Berger states that it is much easier for a programmer to learn how to make a program that interprets data than for a data scientist to translate his specific, one-off analyses into programs. With a newfound understanding of why predictive applications are so important to our data-obsessed society, I look forward to exploring how I can begin developing applications that take advantage of machine learning.

The Place For Tools in Development

Especially for new or inexperienced programmers, tools can be a great way to get the ball rolling or learn how to create programs that work. Too often, however, programmers rely on their tools to think for them, a dangerous and often damaging decision. A post by Robert Martin on his Clean Coder Blog titled “Tools are not the Answer” explains potential causes of the impending “software apocalypse” and also points out some common mistakes that developers should avoid. Martin acknowledges the value of tools and technologies such as Light Table, but feels that such tools are not going to solve the apocalypse. Tools only further complicate things rather than addressing the underlying cause, which Martin identifies as a general lack of discipline among programmers.

Rather than trying to fix bad code with more code, Martin thinks that we should simply aim for more disciplined programming. The reasons he gives for the cause of the apocalypse are:

  1. Too many programmers take sloppy short-cuts under schedule pressure.
  2. Too many other programmers think it’s fine, and provide cover.

I feel that Martin’s first reason is more significant than the second. While deadlines are often outside of the programmer’s control, the choice to take a short-cut that jeopardizes the integrity of the code is a conscious one. Avoiding this dangerous mistake may require extending deadlines or missing them altogether. Weighing the risks of releasing an inferior product against delivering it past its original deadline may depend on the product’s application. Reputations would certainly be more severely impacted by the former, while the latter may cause only minor inconvenience to the end-user.

I don’t see the second reason Martin gives as much of a problem. I would argue that other, more experienced programmers should help to implement the feature properly rather than allowing an overwhelmed programmer to sloppily stumble through a buggy implementation. Martin seems to think that tattling on the sloppy programmer is the solution to making sure that he pays for his carelessness. I think that in any team-driven environment colleagues should have one another’s backs, and everyone should be held accountable.

While I stand behind Martin’s opinion that the real reason behind the impending software apocalypse is a lack of general discipline among programmers, I only partly agree with the causes he proposes for this lack of discipline. I think that, more importantly than anything else, the programmer must consider the risk he or she is taking by rushing through something without proper and rigorous testing. Some examples of software bugs that caused panic and chaos are found in “The Coming Software Apocalypse,” the article that Martin repeatedly refers to in his own blog post. While the code that I am presently writing does not have any real-world consequences (apart from a poor grade if it does not meet the requirements of the assignment), I am challenging myself to write code as if someone’s life depended on the reliability of what I write. Who knows, someday it just might.

Turning the Big Ball of Mud into Modular Code

What Konrad Gadzinowski describes in the opening paragraphs of his post “Creating Truly Modular Code with No Dependencies,” the “emotional rollercoaster” of developing software, is something that I’m sure anyone who has ever written a program has experienced. I certainly encounter it each time I write code for a project, whether academic or professional. Eager to begin a project, I often dive in and complete the simpler parts first. During this time, progress seems to move very quickly. After all of these easy, simple pieces are done, however, progress seems to slow or stall. As the requirements become more complex, I often find myself going back to previous code and rewriting things so that they integrate more seamlessly with the new element I am adding. This problem is what Gadzinowski describes as the “big ball of mud.” Gadzinowski gives Apache Hadoop as an example of a program whose ball-of-mud interdependencies slow further development and make tracing the source of bugs more difficult. In the image below, each class is represented as a point on the outside of the circle, and each line between two points represents a dependency.

(Image source: https://www.toptal.com/software/creating-modular-code-with-no-dependencies)

With so many interdependent classes, I imagine that untangling the web to trace bugs in Apache Hadoop would be a nightmarish task. Gadzinowski offers a solution to the ball-of-mud problem, however, that seems like sound advice. His suggestion is to use the element design pattern when developing software. This modular pattern aims to create reusable pieces of code that are independent of other classes. This is done through the use of element classes and element listener interfaces. In this way, all of the required dependencies for an element are encapsulated within that element. Outside classes that wish to use the element are not concerned with its underlying design; they interact only with the element’s listener. Gadzinowski presents this as a way to increase the flexibility of the element, allowing it, for example, to output to any number of different external environments through an identical listener call.
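Based on my reading of the post, here is my own rough interpretation of what an element and its listener might look like. The names and the download example are mine, not Gadzinowski’s:

```typescript
// The listener interface is the element's only link to the outside world.
interface DownloadListener {
  onProgress(percent: number): void;
  onComplete(fileName: string): void;
}

// The element encapsulates its own logic and depends only on its listener,
// never on the classes that happen to be using it.
class DownloadElement {
  constructor(private listener: DownloadListener) {}

  download(fileName: string): void {
    for (const percent of [25, 50, 75, 100]) {
      this.listener.onProgress(percent);
    }
    this.listener.onComplete(fileName);
  }
}

// The same element can "output" to completely different environments
// simply by being handed a different listener implementation.
const consoleListener: DownloadListener = {
  onProgress: (p) => console.log(`progress: ${p}%`),
  onComplete: (f) => console.log(`finished downloading ${f}`),
};

new DownloadElement(consoleListener).download('report.pdf');
```

Swapping the console listener for one that updates a UI or writes to a log would not require touching the element itself, which is the flexibility the post describes.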

While I was immediately willing to listen to the post’s advice after it described a miserable situation that I’ve encountered countless times, reading Gadzinowski’s explanations and examples of the element design pattern has certainly made me a believer. I think that what makes him so credible is his willingness to acknowledge the value in initially jumping into design without worrying too much about the big ball of mud you may be creating. While this may not be the solution for a final release, it can get the ball rolling and allow the element pattern to make your code more reusable and stable for production releases later on. I will keep Gadzinowski’s advice in mind the next time I begin to worry that I have too many interdependent classes for my code to be reusable or easily maintainable.

Is ‘Agile’ really agile?

The Agile software development methodology is based on the “Manifesto for Agile Software Development,” which outlines the values and goals of the approach. For many software development teams, an Agile methodology has replaced the dated Waterfall method. I think that the diagram below does an excellent job of highlighting the key differences between the two methodologies.

(Image source: https://www.seguetech.com/waterfall-vs-agile-methodology/)

The Agile method allows developers more flexibility and involvement in some of the stages of development that were previously dominated by managers and other higher-ups with no connection to the code itself. In cases where getting a working prototype of a project deployed quickly is of primary importance, the Agile method is the clear choice. In Agile development, responding to changes in the program specification can be done relatively simply through regular meetings and discussions of progress.

The more traditional Waterfall methodology follows a linear sequence, where each step must be completed before the next begins. This means there is often a longer period of development before any product is ready to be deployed. When the product is deployed, however, it will often be more polished and complete. The Waterfall methodology does not respond well to changes in the specification, as these often require backing up in the process and reworking each of the steps.

Now, with a general idea of the two methodologies, I could begin to understand where user ayasin is coming from in his rather intense post titled “Agile Is The New Waterfall.” The post on Medium.com generated quite a buzz of controversy and even attracted the attention of well-known computer science figures, including Uncle Bob. In his post, ayasin argues that Agile has become the tiresome, outdated successor to Waterfall. While he does not offer any solutions, he certainly presents a lot of problems with Agile. Ayasin describes the Agile development process as follows: “You just throw stuff together as quickly as possible because you know it’s mostly trash anyway.” This hardly seems like a way to produce quality software. What’s more, ayasin argues, more of the responsibility (and potentially the blame) is placed on the developers themselves, as they are given the illusion of involvement in the process without any real control over the outcome.

Before finding ayasin’s post on Medium.com, I had only a vague idea of the Waterfall and Agile methodologies. After a bit of research into the two strategies, the post seems to make some excellent points. While I agree with some of them, I wonder whether ayasin is being a bit harsh on Agile. It would seem that, when properly implemented and followed, the Agile methodology has significant advantages over the traditional Waterfall method. Reading about the two methods has given me insight into some of the challenges I can expect to face when working on a project in the future. I feel nervous but prepared for these potential challenges and look forward to someday working on projects like the ones described in my research.

When Object Oriented Programming (OOP) becomes Programming fOr Others (POO)

Something that has certainly been ingrained in my programming brain is that code should be as easy to reuse as possible, and that this is done through the use of objects in what is known as object-oriented programming. In my very first programming class I used Java, and most of the academic programming that I have done since then has also been in Java. Java is a self-described “general-purpose, concurrent, strongly typed, class-based object-oriented language.” As a result of its object-oriented nature, one cannot learn to program effectively in Java without learning how to program in an object-oriented manner. While object-oriented programming can often allow for the efficient reuse and maintenance of code, it may also overcomplicate things in certain instances. Knowing when and where to step away from an object-oriented approach can be important to creating something that is easy for others to understand and build from.

In a Coding Horror post titled “Your Code: OOP or POO?” from March 2007, Jeff Atwood explains why programming in a way that considers fellow programmers who may work with your code after you is more important than mindlessly creating objects for the sake of creating them. Atwood goes on to explain why it is the principles of object-oriented design that are truly important. These are things like encapsulation, simplicity, and the reusability of your code. Atwood stresses that if you attempt to “object-ify” every concept in your code, you will often be introducing unnecessary complexity. He uses an interesting metaphor that compares adding objects to adding salt to a dish – “a little goes a long way.”
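To illustrate the kind of unnecessary complexity Atwood is warning about, here is a small, made-up contrast between an “object-ified” version of a trivial task and the simpler alternative. Both snippets are my own, not taken from the post:

```typescript
// Over-engineered: an interface, a class, and a factory for one tiny task.
interface Greeter {
  greet(): string;
}
class GreeterImpl implements Greeter {
  constructor(private name: string) {}
  greet(): string { return `Hello, ${this.name}!`; }
}
class GreeterFactory {
  static create(name: string): Greeter { return new GreeterImpl(name); }
}
console.log(GreeterFactory.create('world').greet());

// Simpler: a plain function does the same job with far less ceremony.
function greet(name: string): string {
  return `Hello, ${name}!`;
}
console.log(greet('world'));
```

Neither version is wrong, but the first buys nothing for its extra layers, which is the “too much salt” situation Atwood describes.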

It must be made clear that Jeff Atwood and the other programmers he mentions in his post are not against OOP. Rather, they are against the abuse and misuse of OOP by those who do not understand where creating objects is beneficial and where it is simply cumbersome or clumsy. Object-oriented programming is an extremely powerful tool for creating projects that are reusable and easily maintained or changed. What is important to take away from Atwood’s post is that the real problem is the way new programmers are taught to think that every piece of code they write must somehow become an object, lest it be considered poor programming. Although he never states it directly, I took Atwood’s post as a call to educate new programmers about the potential pitfalls of writing overly complex object-oriented code in place of a simpler alternative that does not involve objects.

Why Repeatedly Repeating Code is Bad Programming Practice

After a discussion with a friend about the recent eclipse, the subject of the apocalyptic end of the world came up. I was reminded of Y2K and decided that it might be worth some research, as I was too young at the time to really understand what was going on. As a student of computer science, perhaps it would provide me with some important examples of things not to do in my own coding. In a blog post written for Microsoft Developer, Steve Rowe shares what he learned from an instructor about the true cause of the Y2K scare: a failure to follow the DRY, or Don’t Repeat Yourself, principle. Y2K was caused not just by mistakenly representing a four-digit year with too few digits, but by making this error over and over across multiple files. Unless absolutely necessary, code with identical or near-identical functionality should not be duplicated. Following the DRY principle makes maintaining and repairing code simpler and easier; it is important that those striving to become excellent programmers follow it.
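As a small, self-invented illustration of the principle, consider date formatting of the kind implicated in Y2K. Rather than copying the same two-digit-year logic into several files, it belongs in one shared function that every caller reuses:

```typescript
// Before: the same (buggy) two-digit-year formatting pasted into many files.
// const label = `${date.getMonth() + 1}/${date.getDate()}/${date.getFullYear() % 100}`;

// After: one shared, four-digit implementation; fixing a bug here fixes it everywhere.
export function formatDate(date: Date): string {
  const month = date.getMonth() + 1; // getMonth() is zero-based
  return `${month}/${date.getDate()}/${date.getFullYear()}`;
}

console.log(formatDate(new Date(2017, 11, 31))); // "12/31/2017"
```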

While my mistakes are not going to cause the same devastation as those of the developers behind the Y2K scare (yet), they have certainly caused me a great deal of frustration while programming for assignments or personal projects. On more than one occasion, I’ve found myself repeatedly trying to remedy a certain piece of code, only to find out later that the error was caused by similar code implemented elsewhere. It was this duplicated code that was actually responsible for the error, not the unused or irrelevant piece that I had been wasting time attempting to correct. My failure to follow (or even be aware of) the DRY principle, which I was unfamiliar with before looking over the syllabus for Software Construction, Design and Architecture, has resulted in countless hours of wasted time and energy. Any programmer, no matter how good he or she may think they are, could always stand to improve. Not only will following the DRY principle allow your code to be more easily understood by others, it will make writing documentation and performing maintenance much simpler. Steve Rowe makes an interesting comment before closing his post, stating that, if duplicating code is deemed necessary, “It might not be a bad idea to put a comment in the code to let future maintainers know that there’s similar code elsewhere that they should fix.” If we all attempt to better follow DRY and Rowe’s advice, maybe we can avoid future Y2K-esque scares.