ORM’s Dreaming Big: Pt 1 (Schema)

So I’ve come across Waterline. Waterline is unique in that it isn’t just an interface to SQL or MongoDB but to anything. *Anything*. (*Anything that someone has made an adapter for.) Unfortunately, my experiences with it have led me to believe it is not a good fit for a full stack outside of Sails. I’ve written more about my issues with the framework here. Now, I’m asking for a lot there, so I don’t expect it to go anywhere. But it sent me on a three-day coding sprint of trying to develop my own ORM the way I wanted it. As I continued, I realized it is a ton of work alone, and not all of the things I desire are available to mooch off of. But the Idea… The Dream… That can live on…

What would it be used for?

This is one of the reasons why I wrote this post. Waterline is awesome. However, it left me wanting more, and questioning whether supporting too much is giving me less. As a result, the interfaces I believe are of utmost importance to provide compatibility with are:

  • Memory – This was a proof of concept by them; however, it can be implemented in a fast manner. Libraries like Lazy.js will compile all the arguments so everything runs in a single loop. Additionally, supporting proper indexes can add even more speed. However, the point of being able to run in memory is so that you can create “collections” easily and beautifully, and query them as you would anything else.
  • LocalStorage – This is another clientside feature that can be implemented. LocalStorage is an interesting beast but nonetheless quite manageable. What you would do is store everything starting with connection/model, where connection and model are the connection and model names. From there you would store indexes under connection/model/indexName and probably each object in its own place, such as connection/model/ObjectID. This allows you to avoid loading too much at once and to asynchronously retrieve objects as you need them, instead of loading everything into memory and hoping all goes well.
  • HTTP – By providing a wrapper to create HTTP calls, you can interface with your database easily as if it were again on the server. Of course a serverside implementation is also necessary; however, I think that’s relatively simple in the long run. Perhaps that begs the question of creating “Users” that can interface with your ORM.
  • FileSystem – MongoDB is the standard, without a doubt. However, I’m a strong believer in diversity (when it’s convenient to say I am). As a result, creating a filesystem document-based framework doesn’t seem too far off or out of line. It would most likely be quite similar to LocalStorage, actually.
  • MongoDB – Mongo in many ways provided the breakthrough db. Might as well still be able to interface with it.
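
To make that LocalStorage key layout concrete, here’s a rough sketch. The helper names are mine, and a plain object stands in for window.localStorage:

```javascript
// A sketch of the connection/model key layout described above.
// A plain object stands in for window.localStorage here.
var store = {};

function modelKey(connection, model) {
  return connection + "/" + model;
}
function indexKey(connection, model, indexName) {
  return modelKey(connection, model) + "/" + indexName;
}
function objectKey(connection, model, objectId) {
  return modelKey(connection, model) + "/" + objectId;
}

// Each document lives under its own key, so it can be fetched lazily
// instead of loading the whole collection into memory.
store[objectKey("default", "users", "54a1b2")] = JSON.stringify({ name: "Ada" });

JSON.parse(store[objectKey("default", "users", "54a1b2")]).name; // "Ada"
```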

Sugary Snacks

The Validator

The validator is without a doubt one of the most important features of any database. The purposes of a validator are not just for the database but also for the clientside. When creating a form, being able to just hook in and apply a validator is without a doubt one of the sweetest things possible. Unfortunately, those validation parameters generally are not included in the database as well. This is partly a good thing, since you don’t want users to see all of your internals; however, for some code, rewriting your models is kind of a pain in the ass. Just tedious work. As a result, here is the first commandment:

  • A Validator can also be easily hooked into any form
  • A Validator can also easily be used to generate forms

The second part is obviously much more complicated, as you can see here and here. But we’re dreaming big here, right? No holds barred. Whatever we damn well please. And I would be pleased to not have to rewrite code for every single form I ever come up with.

As for creating it, here is a laundry list of features…

“Native” Types

The native types I would prefer to keep as simple as possible:

  • Number
  • Buffer
  • String
  • JSON
  • Any
  • Typed ObjectID – Can specify the specific Model(s) allowed.
  • Typed Array – Can specify the type of array it will be (Any is also allowed)

The reason long, date and others are not supported is that those will end up being compiled to these native types anyway. The ObjectID is the only thing that’s really different. Numbers, HashMaps and ObjectIDs are the only things that cannot be evaluated to an array.

To use these:

prop1: Number, //You can provide the object class
prop2: "buffer", //You can provide a string specifying the type
prop3: ["string"], //You can create a typed array
prop4: {
  native: Object //You can specify the type explicitly
},
prop5: Framework.Types.ObjectID, //This specifies any other document
prop6: "objectid:modelname", //This specifies that you expect it to use another model
prop7: AModelClass, //This specifies that you expect to use that other model. This is the same as above
prop8: {
  native: "objectid",
  model: "modelname" //Specify the type explicitly
},
prop9: null, //Specifies anything
prop10: Framework.Types.Anything //Specifies anything as well

Additional Types

You may also use custom SchemaTypes. However, SchemaTypes will not have all the features that a validator expects. In addition, anything that you would write in a custom SchemaType may also be provided within the schema directly.

To use a custom SchemaType:

prop1: {
  native: CustomSchemaTypeClass
}
prop2: {
  native: "customschematypeclass"
}

If you provide a string, your validator will create a dependency on that SchemaType. This means that if that SchemaType is not available in your framework, your model cannot be used until it is. Some of the features below cannot be used within custom SchemaTypes and are only available to the Schema. Additionally, you can provide custom options that will override the SchemaType’s original options.

prop1: {
  native: "customschematypeclass",
  a_custom_option: "a value"
}
prop2: {
  native: CustomSchemaTypeClass({
     a_custom_option: "a value"
  })
}
Basic Validators
  • Required – Cannot be null or undefined
  • Final – After first created, cannot be set again
  • Unique – This will also create an index.

These are simple and straightforward.
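
As a sketch of how they might look in a schema (the option names just mirror the list above):

```javascript
// Hypothetical schema using the three basic validators.
var userSchema = {
  email: {
    native: String,
    required: true, // cannot be null or undefined
    unique: true    // also creates an index
  },
  created_at: {
    native: Number,
    final: true     // after first created, cannot be set again
  }
};
```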

Default

At times you may want to provide a default. And now you can, in three different ways:

property1:{
  native: [String],
  default: ["value"]
},
property2:{
  native: [Number],
  default: function(){
    return Math.random();
  }
},
property3:{
  native: [ObjectID],
  default: function(next){
    Query().find({something: value}).exec(next);
  }
}

Validators Available in Custom SchemaTypes

The following are available to use within your Schema as well as SchemaTypes. SchemaTypes are interesting in that they can be extended indefinitely, yet only ever return the SchemaType. This is done with the following pattern:

function CustomSchemaType(options){
  if(!(this instanceof CustomSchemaType)) return new CustomSchemaType(options);
  this.options = options;
}

CustomSchemaType.prototype.extend = function(options){
  //Merge into a fresh object so the original options are not mutated
  options = _.merge({}, this.options, options);
  return new this.constructor(options);
};

Simply put, you can extend and extend and extend away.
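
To show what I mean, here’s the chain-extension idea end to end (a sketch; the `extend` name and the shallow `Object.assign` merge are my assumptions, standing in for lodash’s `_.merge`):

```javascript
// Sketch of a SchemaType that can be extended indefinitely: each call to
// extend() merges new options over the old ones and returns a fresh instance.
function CustomSchemaType(options) {
  if (!(this instanceof CustomSchemaType)) return new CustomSchemaType(options);
  this.options = options || {};
}

CustomSchemaType.prototype.extend = function (options) {
  // Merge into a fresh object so the parent's options are not mutated
  return new this.constructor(Object.assign({}, this.options, options));
};

// A trivial validator for the example
function isUrl(v) { return /^https?:\/\//.test(v); }

var Url      = CustomSchemaType({ native: String, validator: isUrl });
var HttpsUrl = Url.extend({ protocol: "https" });
var ShortUrl = HttpsUrl.extend({ maxLength: 32 }); // extend and extend away
```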

Custom Validators

Custom validators come in two flavors: synchronous and asynchronous. This will be a theme as we continue.

syncProperty:{
  native:Number,
  validator: function(value){
    return false;
  }
},
asyncProperty:{
  native:Number,
  validator: function(value,next){
    next(false);
  }
}
Value to Array Validation

The idea here is that you may want to compare a value to an array of values. Something like:

//BAD!
arrayCompareBad:{
  native:Number,
  validator: function(value){
    return [1,2,3,4].indexOf(value) !== -1;
  }
},
//Good.
arrayCompareGood:{
  native:Number,
  in:[1,2,3,4]
}

But that is slow, as you create an array every time. Instead, the idea is you’d be able to define it beforehand:

  • In – Ensures that the value is in the values
  • Not In – Ensures the value is not in the values

Now, you can provide this value up front or provide it through a synchronous or asynchronous function. It’s important to note that anything can use this syntax. Enums have bothered me for quite some time. Any value can be compared to other values to ensure it is restricted to a certain subset. Enums used to only apply to strings; however, they can be applied to numbers, Buffers and, yes, even Arrays. HashMaps and ObjectIDs are a different beast, however. HashMaps are keys, so there is no point in attempting to give them an enum. ObjectIDs require that certain ObjectIDs already exist. Now, this can be done; however, at that point you would need to specify a query and do it asynchronously.
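
A sketch of what “defining it beforehand” buys you: the validator can compile the allowed values into a lookup table once, at schema-compile time (ES6 `Set` shown; an object keyed by value works in older engines):

```javascript
// Compile the allowed values once, instead of allocating an array on
// every validation call.
function compileIn(values) {
  var allowed = new Set(values); // built once, at schema-compile time
  return function (value) {
    return allowed.has(value);   // O(1) per validation
  };
}

var inValidator = compileIn([1, 2, 3, 4]);
inValidator(3); // true
inValidator(9); // false
```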

Array to Array Validation
  • Any – True If At least One of Validator Array Values are present
  • All – False Unless all of the Validator Array Values are present.
  • More – True if there is more than just the Validator Array Values present
  • Not Any – False if at least one of the Validator Array Values is present
  • Not All – True Unless all of the Validator Array Values are present
  • Not More – False if there is more than only the Validator Array Values present. Will also return true if empty.

You can use these like so:

property:{
  native: [String],
  any:["any","of","these"],
  not_more:["any", "of", "these", "and", "no","more"]
}
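
A sketch of how a few of these checks could be implemented with a precompiled Set (the semantics are my reading of the list above):

```javascript
// Compile the validator array once, then run array-to-array checks cheaply.
function compileArrayChecks(values) {
  var expected = new Set(values);
  return {
    // Any: true if at least one expected value is present
    any: function (arr) {
      return arr.some(function (v) { return expected.has(v); });
    },
    // All: false unless every expected value is present
    all: function (arr) {
      return values.every(function (v) { return arr.indexOf(v) !== -1; });
    },
    // More: true only if something beyond the expected values is present
    more: function (arr) {
      return arr.some(function (v) { return !expected.has(v); });
    }
  };
}

var checks = compileArrayChecks(["any", "of", "these"]);
checks.any(["of", "something"]);     // true
checks.all(["any", "of"]);           // false
checks.more(["any", "of", "these"]); // false ("not_more" would be true here)
```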
Population and Save Overrides

At times you will want to populate your data from a source other than the database. Additionally, you will want to override how that property is stored after it’s been validated. An example would be “User.notifications”. To duplicate the data would be absurd, and if you are storing every ObjectID, you may run into numbers in the thousands. However, you can have that particular part populated on the fly.

image: {
  native: String,
  native_populate: Buffer,
  populate: function(storedvalue, next){
    fs.readFile(storedvalue, next);
  }
}

It should be noted that these aspects should probably also be available as a stream. Something like this would also work

function MyReadableStreamClass(storedvalue){
  ReadableStream.call(this);
  this.storedvalue = storedvalue;
}
image: {
  native: String,
  native_populate: Buffer,
  populate: MyReadableStreamClass
}

This will create the readable stream on the fly. It should be noted that different things will populate in different ways. As a result, while this may send raw data, another might send JSON.

Now, you may be populating data; however, what happens when someone wants to save something?

image: {
  native: Buffer,
  depopulate: function(args,next){
    var ext = mime.findOutMimeExtension(args.name);
    var name = this.instance.id+"/image."+ext;
    fs.writeFile(name, args.buffer, function(e){
      if(e) return next(e);
      next(void(0), name); //name will be stored with the doc
    });
  }
}

//or

image: {
  native: Buffer,
  depopulate: MyWritableStreamClass
}
Digestors (Constructor Overloading) and Verbosity

Your developers may want to be using Moments as dates, yet expect to be able to send a normal JavaScript Date as something to be stored. This is where digestors and verbosity come in.

date: {
  native: Number,
  digestor: [function(date){
    if(date instanceof Date) 
      return date.getTime();
  }, function(time){
    if(typeof time == "number") 
      return time;
  }, function(moment){
    if(moment instanceof Moment) 
      return moment.valueOf();
  }]
}

Here we can see that any of the above values will be considered a valid number. As a result, you don’t have to worry about what you set the date to be. If you want to always get the date as a moment:

date:{
  native:Number,
  verbose:function(value){
    return moment(value);
  }
}

This will allow you to easily do whatever you want with the number without changing your database.
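
A sketch of how a digestor chain might be run: try each digestor in order and keep the first result that isn’t undefined. The runner is my assumption; the digestor functions match the date example above:

```javascript
// Run each digestor in order; the first non-undefined result wins.
function digest(digestors, value) {
  for (var i = 0; i < digestors.length; i++) {
    var result = digestors[i](value);
    if (typeof result !== "undefined") return result;
  }
  throw new Error("No digestor accepted the value");
}

var dateDigestors = [
  function (d) { if (d instanceof Date) return d.getTime(); },
  function (t) { if (typeof t === "number") return t; }
];

digest(dateDigestors, new Date(0)); // 0
digest(dateDigestors, 1234);        // 1234
```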

Schema Methods

Virtual Properties

Now, we’ve seen actual properties, but there are some properties that will not be stored and are instead derived from the instance itself:

var schema = new Schema(validations);
schema.virtual("virtual_property", function(){
  // getter
  return this.stringA +"-"+ this.stringB;
},function(value){
  value = value.split("-");
  this.stringA = value[0];
  this.stringB = value[1];
});

Schema Indexes

The last and probably the most important feature is indexes. Now, indexes are not and should never be available to a custom SchemaType. Additionally, because indexes can be so flexible, they bring up some interesting decisions. It’s important to note that not only can you index normal properties, you can also index virtual properties as well. The indexes available are:

var schema = new Schema(validations);

schema.index("propertyname");
schema.index("uniqueproperty", "unique");
schema.index("functionalproperty", function(a,b){
  return b - a;
});
schema.index("callbackproperty", function(a,b,next){
  async.map([a,b],fs.stat,function(err,res){
    if(err) return next(err);
    next(void(0), res[0].size - res[1].size);
  });
});
Validate

Using validate is simple.

  • If the object is JSON – will validate the JSON
  • If the object is a DOM element – will validate the DOM element if it’s a form. If it is not, it will throw an error.

This will be available from the model via Model.validate, which is basically Model.constructor.validate.bind(Model.constructor).

Finishing words

It’s important to note that the schema is simply a validator and provides database indexes. You cannot make queries with it, and it essentially does nothing except provide important information for storing the data. Ideally, you will want to do as much as you can with the SchemaTypes so that you can reuse more code. The indexes, virtual properties, what’s required, etc. are dependent on each individual Schema. If you don’t like me dreaming big, well… To be honest… I believe dreaming big is part of the reason why I am here today. Because I see what I want to make and I go out and try to do it. And if I cannot, I flesh out the idea so much that I can look back on it and say “If only”.

Forms, Query Strings and JSON Objects

I was going to write a long rant on jQuery’s GitHub issues (specifically this one). However, I think I’ve been in the programming world long enough to realize a few things:

  • Everyone wants a feature, no one wants to code it
  • Dismissal comes from lack of assertiveness
  • The open source world runs on voluntary slavery 😉

As a result, I don’t blame the guy for dismissing the person so easily. But I also think it’s a bit heartbreaking. So much so that I was going to write a big angry letter. However, instead of doing it there, I’ll do it here…

jQuery’s interactions with forms are quite lackluster. They are so bad that, in my eyes, it’s a reasonable question why serialize and serializeArray haven’t been deprecated. A cynical and confrontational perspective would say they’re only there for backwards compatibility. But `jQuery(form).serialize()` sticks around for a better reason: it’s an easy-to-use, useful feature for developers. jQuery in fact is not necessary at all, except for the fact that jQuery is easy to use and useful. You don’t have to think up an algorithm; it’s already there. There’s something to be said, though, that most of these things are just wrappers. And they are; they are just wrappers. However, there is a big difference between an easy-to-use .width and an unstandardized implementation of validation. One is a very useful plugin; the other feels like it should have been done this way all along.

QueryString <-> Form

We all know about jQuery’s serialize() function (and if you don’t, it’s useful for submitting a form over AJAX), which does its job effectively. However, what about deserialize? There’s some interesting history here; in particular, I’m going to talk about this repo. About four years ago from today, in 2010, the maintainer created a ticket for jQuery. As you can see, it was refused. One could say they have valid arguments. So let’s look at them.

  • Too Large – Sitting at 1.4 kb, it is far from large for what it does
  • We don’t need it in core – Are jQuery’s effects needed in core? What about data? It would seem to me that jQuery should be about manipulating the DOM in a simple manner
  • Not Used Everyday – This is arguable. Unfortunately, I could only come up with one reason, since queries are only in URLs: search forms should populate based off the current query. That being said, that is an excellent reason.

JSON Object <-> Form

So who believes that form → JSON would be useful?

This “serializeObject” business isn’t just some fad that will go away. It has been a problem for as long as people have wanted to interact with the form before it gets submitted. They have tried to circumvent it with serializeArray, and this is arguably far more efficient than serializing to a JSON object. However, some issues start to arise.

  • The serialized array doesn’t have any of the attributes; as a result, you must then make an additional query for the input to see what validations it has. But at that point you may be better off validating by querying for inputs directly.
  • If you are looking for a specific input, you must iterate over the objects until you find the one with the name you’re looking for. They are most likely in the same order as they were in the DOM, so you may be able to select one by its number. However, since disabled controls will not be included, the programmer is better off iterating.

Now, going one way is possible in plugins and even without them (technically). However, how do you apply a JSON object as form values? There is no function in jQuery that will do this. However, our deserialize plugin from before kicks ass in another area. This would have far more uses than applying a GET query to a form, including:

  • Setting defaults to a form
  • Setting the form based off an ajax source
  • Allowing a form to have a two-way binding mechanism – this can be used in physics simulations where the form would be updated based off the position, velocity, etc. However, updating the form will, as a result, update the physics.
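
For the record, the missing direction doesn’t take much code. A minimal sketch against plain DOM-ish objects (checkbox, radio and select handling is elided; the mock form below stands in for a real one):

```javascript
// Apply a JSON object's values onto a form's named fields.
function deserializeInto(form, values) {
  Object.keys(values).forEach(function (name) {
    // Real forms expose named fields via form.elements
    var field = form.elements ? form.elements[name] : form[name];
    if (field) field.value = values[name];
  });
  return form;
}

// Mock input objects standing in for real elements:
var form = { title: { value: "" }, qty: { value: "" } };
deserializeInto(form, { title: "Red Rose Bouquet", qty: 3 });
form.title.value; // "Red Rose Bouquet"
```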

JSON Object <-> QueryString

If you commonly parse query strings to JSON objects (which would likely only be used when applying the URL to a search), you can use something along the lines of:

var formobj = require("querystring")
  .parse(jQuery(form).serialize());

so long as you have Browserify or an AMD loader available. This module will also allow you to implement:

var windowobj = require("querystring")
  .parse(window.location.search);

However, if you’re going to go down this route, I highly suggest you use qs. The team that backs it is funded by Walmart, and it supports a lot more than querystring does.

Will change ever come?

The first thing we should ask ourselves is: why is change necessary? The idea of it being a plugin has not perked enough ears to warrant anyone else getting fussed up over it.

Finding plugins wastes time

Serialize doesn’t have a good antonym. Is it unserialize? Is it deserialize? Does it matter… And yet it does.

As you can see, deserialize produced the best results. Unserialize pointed toward PHP’s unserialize and jQuery deserialize on Stack Overflow, and “query forms” produced the worst results; however, I included it as a newer developer’s example. Hopefully, people find deserialize quickly. That being said, people may end up looking for a non-plugin solution…

When 10,000 developers recreate the wheel…

However, I think there’s something to be said for this comment here. If there’s something not available in jQuery, people will generally avoid plugins if they can. This can be attributed to:

  • Plugins are not held to the jQuery code standard
  • Plugins may be poorly coded
  • Plugins may not offer everything the developer needs

In addition, some things are simple enough that there is no need to use a plugin for them. As a result, instead of people spending 5 hours to implement 5 features, they spend 5 hours perfecting loops and queries. And even after those 5 hours, it may not be perfect. What’s more, writing this may take a day or may take a week. As a result, we lose out on valuable time for developers to solve real-world problems; instead they are solving development problems. We can also look at this time wasted at scale.

  • If 1000 people rewrite “deserialize” or “serialize to Object”
  • and on average it takes a person 4 hours
  • then 1000 × 4 = 4000 hours are lost from our precious developers’ lives.

Is Change Necessary?

I think there is food for thought: are forms common to interact with, or are they just better off ignored? Do you build only the simplest tools, or do you build a full-fledged form handler? Why should the jQuery team care? Supply and demand.

What are jQuery’s Competitors?
  • Zepto – equivalent form support
  • Cash – less form support
  • Minified – No form support
  • Snack – No form support
  • $dom – No form support
  • xui – No documentation, I’m not about to start looking at their code. Ain’t nobody got time for that.
  • dojo – Their documentation is hell.
  • ExtJS – Their documentation isn’t much better
  • Mootools – no form support
  • YUI – this has no form support
  • Medley – I’m not surprised
  • Perhaps you don’t need it – Don’t get crazy, I don’t feel like looking at crazy looking code.

So it would seem much more of development for developers comes from either providing a wrapper for every single function, following jQuery’s lead, or making it smaller. OK, so using different tools would be too frustrating or would not provide us what we need.

Are there enough vocal demanders?

I find programmers’ schools of thought interesting:

  • The cool request – People requesting something because it would be a cool feature but do little to no work.
  • Not Made Here – People that write stuff just because they want to make everything in house
  • Part of the team – Once you get on a successful team, it means funding. It means everyone carries their weight. It also means a hive-mind mentality where your voice will generally try to agree with others. And there’s no reason to fight with other successful teams.
  • Too Bad of a programmer to have a say – Everyone else is so advanced, why should I say anything?
  • Think they are god’s gift to the earth – Think their opinion holds the most weight.
  • Something Else – Which you may fit in to.

2.5 out of 5 of these people will likely not make requests. Teams will ask other teams for requests; as a result, those features may be implemented, because a successful team holds more weight than a single person.

However, if you think that these features should be implemented, say something. But generally, death by a thousand cuts is a bad plan, as it causes disorganization. If you think it should be included, search for the issue and post there. Here’s a link to what I found.

ES6: What I care about

ES6 is coming! If you don’t know what it is, here is a helpful slideshow. What ES6 brings is a lot of sugar. However, I’m left to question: how fast is this sugar? As a result, here are the things I’m interested in.

  1. WebRTC – The slideshow doesn’t talk about it; however, WebRTC is coming as well. What WebRTC provides, however, isn’t sugar.
  2. Typed Arrays – This may not seem like a big deal; however, it means that all parts of the array have a type, which means speed.
  3. Proxies – This is huge actually as it allows for a “magic” object where you can make getters and setters to do whatever you want.
  4. Classes – I love object-oriented programming, and I accept our new extendable overlords.

What is ES6 Missing?

  1. Binary search and insert – Arrays currently sort fine, but that really isn’t good enough. We are currently implementing our own versions to compensate. However, if Firefox has taught us anything, it’s that it’s possible for a native implementation of a loop to be faster than one we make ourselves.
  2. EventEmitter – There ought to be a native implementation of an event emitter. Event emitters are among the most used concepts in JavaScript, and the fact that there isn’t a native one just doesn’t make sense to me.

Automated Mongoose: A Red Herring?

So for many months I have been attempting to automate the views and methods of Mongoose Schemas. However, the longer I attempt it, the more I realize how loose Mongoose can be. A small example is the SchemaTypes (which have little to no documentation). At this location we see an issue that I and another person have had. While the maintainer isn’t very interested in fixing this, despite it existing in every other schema type (his own reasons are his, not for me to accept or deny), I have gone down a separate route:

if(path.hasOwnProperty("caster")){
  return "Array";
}else if(typeof path.instance != "undefined"){
  return path.instance;
}else{
  return path.options.type.name;
}
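
Wrapped as a helper, with mock path objects standing in for real Mongoose SchemaTypes (they mimic the three cases checked):

```javascript
// Derive a human-readable type name from a Mongoose-like schema path.
function pathTypeName(path) {
  if (path.hasOwnProperty("caster")) {
    return "Array";                 // array paths carry a caster
  } else if (typeof path.instance !== "undefined") {
    return path.instance;           // most paths expose .instance
  } else {
    return path.options.type.name;  // fall back to the constructor's name
  }
}

// Mock paths, not real Mongoose objects:
pathTypeName({ caster: {} });                // "Array"
pathTypeName({ instance: "String" });        // "String"
pathTypeName({ options: { type: Number } }); // "Number"
```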

It’s not a huge issue, but it results in a little frustration. Nonetheless, I’m finding there are many issues with the whole scenario. And I don’t mean in terms of SchemaTypes; I mean other issues. Among them being…

  • Extended SchemaTypes: URLs are essentially strings; however, they must go through a different validation process. Should I extend the String SchemaType in order to ensure it has the same possibilities?
  • Faceted Searching: This is very important. When it comes down to finding exactly or around what you need, it’s nice to have a way to trim down the results. However, each SchemaType has its own MongoDB comparison operators or evaluation operators, and I cannot be sure which can be applied to which (unless I check for ancestry).
  • Different properties are viewed differently: Though the input may be the same, ensuring a title is seen as a title and a URL is used as an href isn’t a given. This may seem obvious; however, I have been attempting for many months to ensure that there is no difference between routing and viewing. Mostly because I enjoy being a lazy, DRY programmer.
  • Different models use different organization patterns: This is most apparent with maps and photo-oriented data, where nobody really cares about the text unless they click on something.
  • Almost all models will use some sort of auxiliary index or model: For example, you can have a user model. It has its name, email, password and role. Basic stuff. But then we want to add events. What have they created? What have they liked? Etc. In addition, we want to add an index on the number of views of a particular picture, but also compare those views to videos. This is where we start creating other things that are not attached to the original model, yet whose information gets appended for the views’ purposes.
  • Terms and conditions, tour-guiding and multi-page methods: This is also pretty important, as even though a person may have successfully authenticated, that does not mean they are good to go. However, how are we to know the next step in a multi-page method?
  • User roles: What is the best manner to document what a user’s role is and the role hierarchy, and to ensure we know who can do what in the routing and the viewing?
  • Pretty URLs: Nobody wants to see /Items/3746982119433234. It’s ugly and unfamiliar. People would rather see /item/best_red_rose_bouquet.
  • Model index and root: What do we do here? Give them a preview? Tell them to do X, Y and Z?
  • Aggregating content: How do we show the content aggregated? With a Schema?

There are many issues at hand. And it leads me to further understand how nice content management systems are. Not because they are bloated. Not because they are broken. Not because they don’t offer all the features the language of your choice has to offer. Not because they have ridiculous patterns for event emitting and caching. Or how they don’t like anything more than the good ol’ POST. But because they solved those problems for you. They solve them by forcing you to do it the hard way. They solve them by giving you an excuse to complain and want better. They solve them by other people getting motivated and solving the problem through “plugins” or “modules”.

I want to automate Mongoose so badly. I can feel it at the tips of my fingers. Barely inches away. And yet I understand that, even after I’m done with the beginnings, there is so much more you need to make sure people can do, or have access to the libraries you use to do it.

Is Searching the Same as Customization?

My current project is working in Drupal. I know that may sound strange, as this website and many things I’ve written here are about WordPress. But this is my reality at this point. Now, I and my spearhead (describes who takes the bulk of the work [a term I, and mostly only I, will use]) ran into an issue. The issue was simple and yet complicated…

A client had many, many products; a definitive amount, though it will be expanded over time. Those products generally acted under the same rules, with some minor variations. However, an important variation was the graphical art the consumer will see. A simplified yet ugly diagram can be seen like this:

Product:{
  Sizes:{
    Big:{
      Head:{white, black, grey, color}
      BodyType:{
        Expensive:{
          CustomizationLevel:{
            Full Custom:[type1]
            Kinda Custom:[type1]
            Basic Custom:[type1]
            None:[type1]
          }
        }
        Moderate:{
          CustomizationLevel:{
            Full Custom:[type1,type2,type3]
            Kinda Custom:[type1,type2,type3]
            Basic Custom:[type1,type2,type3]
            None:[type1,type2,type3]
          }
        }
        Cheap:{
          CustomizationLevel:{
            Full Custom:[type1,type2,type3]
            Kinda Custom:[type1,type2]
            Basic Custom:[type1,type2]
            None:[type1,type2,type3]
          }
        }
      }
    }
    Medium:{
      Head:{white, black, grey}
      BodyType:{
        Expensive:{
          CustomizationLevel:{
            Full Custom:[type1,type2,type3]
            Kinda Custom:[type1]
            Basic Custom:[type1]
            None:[type1,type2,type3]
          }
        }
        Moderate:{
          CustomizationLevel:{
            Full Custom:[type1,type2,type3]
            Kinda Custom:[type1]
            Basic Custom:[type1,type2,type3]
            None:[type1]
          }
        }
        Cheap:{
          CustomizationLevel:{
            Full Custom:[type1,type2,type3]
            Kinda Custom:[type1,type2,type3]
            Basic Custom:[type1,type2]
            None:[type1,type2,type3]
          }
        }
      }
    }
    Small:{
      Head:{ grey, color}
      BodyType:{
        Expensive:{
          CustomizationLevel:{
            Full Custom:[type1]
            Kinda Custom:[type1]
            Basic Custom:[type1,type2]
            None:[type1]
          }
        }
        Moderate:{
          CustomizationLevel:{
            Full Custom:[type1,type2,type3]
            Kinda Custom:[type1]
            Basic Custom:[type1,type2,type3]
            None:[type1]
          }
        }
        Cheap:{
          CustomizationLevel:{
            Full Custom:[type1,type2]
            Kinda Custom:[type1,type2,type3]
            Basic Custom:[type1,type2]
            None:[type1,type2,type3]
          }
        }
      }
    }
    Robotic:{
      Head:{white, black, grey, color}
      BodyType:{
        Expensive1:[Glow-in-the-dark,Customized,less exp]
        Expensive2:[Glow-in-the-dark,Customized,less exp]
      }
    }
  }
}

Now, we originally tried to make everything as simple as possible by making 3 products with different customization levels and exceptions:

Product{
  Size:Integer
  Heads:[]
  Body:{
    Expensive:[]
    Moderate:[]
    Cheap:[]
  }
}

However, we found that Drupal’s dropdown attributes did not like what we were doing. We ended up having to make many more attributes, because we could not make the parent decide which values to show, only the child that was depending on the parent. As a result, we ended up essentially making 50+ attributes, and on top of that had to hack in some ugly functionality; the client ends up extremely limited, with the possibility of confusion being extremely high.

Now, this makes me question: why didn't we just make 50+ products instead? The faceted search form would be extremely similar, and taxonomies can have dependencies as well. I don't have the opportunity to go back in time, but I now have a very interesting philosophical question: how different is searching from customization?

 

If we take a look at how customization works, we have a couple of expectations:

  1. Whatever is changed, changes a net value (such as money)
  2. It is pointless to make duplicate products when two are so similar
  3. Duplicates may clutter search results
  4. Duplicates may not be easy to tell apart

So essentially the two main reasons why customization is superior are user experience and functionality with a net cost. This makes perfect sense. Now let's consider what would happen if we used taxonomies.

Size:Integer
Head:[String] Enum[black,white,grey,color]
-Dependent on Size
BodyType:[String] Enum:[Expensive,Medium,Cheap]
-Dependent on Size
CustomizationLevel:[String] Enum:[Full,Kinda,Basic,None]
-Dependent on Body Type
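A minimal client-side sketch of these parent/child dependencies (the option tables below are hypothetical, loosely following the diagram above):

```javascript
// Options for Head depend on the chosen Size,
// mirroring "Head – Dependent on Size" above.
const headsBySize = {
  Large: ["black", "white", "grey", "color"],
  Small: ["grey", "color"]
};

// Options for CustomizationLevel depend on the chosen BodyType,
// mirroring "CustomizationLevel – Dependent on Body Type" above.
const customizationByBody = {
  Expensive: ["Full", "Kinda", "Basic", "None"],
  Medium:    ["Full", "Basic", "None"],
  Cheap:     ["Full", "Kinda", "Basic", "None"]
};

// The parent's current value decides which values the child shows.
function dependentOptions(table, parentValue) {
  return table[parentValue] || [];
}
```

The key point is that the parent value selects the child's option list, which is exactly the direction the Dropdown Attributes setup could not express.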

Now what would end up happening is we create however many products the client wants to sell, but much of the diagram work is out of the way. With taxonomies, we can make a product's price dependent on the taxonomy at hand. However, we would still end up making thousands of products. This wouldn't be a bad thing if we could automate it, because at the end of the day there are two very important concepts:

  1. The type is truly what everything is dependent on. If there is a proper file structure with types, we can make proper taxonomies
  2. The heads are generic and can be repeated (or be a customization)

This essentially divides the expected work by 4, because heads can be considered generic. In addition, because we know which folders the types come from, we can easily add the appropriate taxonomies.

Does this answer the question?

Not quite. And yet it does. Essentially our “products” are really 2 nodes: one being very generic, the other being able to be categorized. Yes, the answer is unfortunately that a mix is needed. However, this leads into a bigger question…

What if the head and body type are limited to a subset that can be categorized?

The most important parts here to consider are…

  1. When an object has multiple independent parts, each of these should be considered a customization aspect
  2. When an object has an enumerated tag, this can be considered searchable
  3. When an object has an undefined number of values that will be enumerated, that should be your axis
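The three rules above could be sketched as a tiny classifier (the attribute shape and field names here are entirely my own invention, purely for illustration):

```javascript
// Hypothetical classifier applying the three rules above.
function classifyAttribute(attr) {
  if (attr.independentParts) return "customization"; // rule 1
  if (attr.enumerated && attr.valuesKnown) return "searchable"; // rule 2
  return "axis"; // rule 3: an open-ended set that will keep growing
}
```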

This “axis” concept is where most of your work will go. It will also be where your client does most of their work. Sure, maybe the client will add to the customization levels, or maybe add another color. But probably not. As a result, types are where you want them to spend 90% of their time.

Fractals: Math, beauty and emotion

[This post renders an interactive fractal in an HTML canvas element.]

Timezones: The client, The server and The database

To start out, I want to say why I'm writing this post. I have recently been struggling with server-to-database interactions involving datetimes, and I had no idea why. My first thought was “maybe I'm using the wrong timestamp”, then “maybe I'm not setting the time to midnight”, and then I realized “maybe my server is in a different timezone than my database”. And this was very, very true.

MySQL, by default, bases its timezone on the server's timezone. That means if you're running a server in New York, it will be based off New York; if London, it will be London. This is OK so long as your web server is in the same timezone, and unfortunately for me, it seems mine wasn't. WordPress has a few functions to work with, such as current_time(), and the timezone_string option. Except it's useless when it is empty, basically telling me UTC (the universal time for computers generally). PHP also returned UTC, which didn't help me either. In all, the tools I hang out with the most (my clean-cut but simple friend WordPress, and my old disorganized but extremely intelligent friend PHP) just don't see things at the same level as MySQL. So what do I do? Well, change MySQL's timezone and change my server's timezone to ensure everyone sees the same thing.

$wpdb->query("SET time_zone = '+0:00'");
date_default_timezone_set('UTC');

Based off this post and the PHP manual, I've found this is the ultimate solution to my problem. I was having such problems with timezones being in different places, and WordPress not helping too much, that I've just given up and decided the proper solution is to set everything in UTC.

This isn't necessarily what I want to do, though. The reason is that I'm forcing the user to experience my application in UTC. Why is this a problem? Well, user experience. However, if I'm going to change the website depending on the user's timezone, I would need to find the person's position in the world to work out which timezone they are in. And generally users don't just give away their timezones. So how am I supposed to do this?

I make all datetime-oriented aspects of my theme return UTC, and when I want to shift the date, whether through XSLT or jQuery, I change it based on what I receive from JavaScript. It isn't pretty. But I can make it pretty. And functionality is more important in this case.
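A minimal sketch of that client-side shift, assuming the theme prints its dates as UTC ISO 8601 strings (the helper names are my own):

```javascript
// Parse a UTC ISO 8601 string from the server. An ISO string
// ending in "Z" is always interpreted by Date as UTC.
function parseUtc(utcIso) {
  return new Date(utcIso);
}

// Format a parsed date for the visitor. toLocaleString applies
// the browser's own timezone, so no geolocation is needed.
function displayLocal(utcIso) {
  return parseUtc(utcIso).toLocaleString();
}
```

In jQuery this would just mean walking every element that carries a UTC timestamp and rewriting its text with the local version.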

 

When's the Right Time?: The Transfer from Local to Production

I've been working on my clockin plugin for some time and have nearly completed most of the graphs and the basic functionality, but there are a couple of things I still need to do. So it leaves the question: when is the right time?

[Image: graphs]

Well, I believe, at this point in time, it's project-based. What are the absolute necessities? For example, even though I have the graph bases, I still need to grab the GitHub trees and append those to the graph. I need to add each post type, when it's updated, to “recent posts” so they are easily accessible. There are time-difference bugs where a month view will show the same times for a different day than a weekday view, as well as timezone problems. I also want to allow projects to have appended bug fixes and enhancements, show them on the author page, and load all previous data from my various locations (I've been working a ton, and I want to show it off).

In essence, a top-shelf user experience.

Then there's the error checking that will allow me to post this to the plugin directory: handling when GitHub authorization for a user is taken away; caching the pictures to promote speed; deciding what to do when a WordPress user is deleted; making sure that the dates, as well as the graphs, are SEO compatible; and not displaying a readme when it is blank.

In essence, very important things that I am not worried about right now, since they don't necessarily all apply to me.

But as I look at my fixes and enhancements, I start realizing how intertwined the plugin and theme enhancements are. So can I truly submit this as a plugin? Or is this just a neat idea for my own personal setup? As I continue, only more enhancements arise, so when does it end? Or do I see it as a continuous project? Is WordPress the best place for this? Could I be doing it better? These are important questions I think everyone may have, and I believe in telling working hard apart from working smart.

MVC as applied to WordPress and a moment of silence for XSL

So, finishing up the backbone of the Clockin Plugin, I've found the next steps are to make it look as I had it on my old site, with a multitude of charts and such. However, the display for the clockin plugin isn't static; in fact it's quite dynamic. The entire idea of a “clock in” is its date, and as such it will be changing almost daily. This is OK, and unfortunate for me, in that it was what bogged down my load times before. But I gotta do what I gotta do.

Now there are a few things I'm going to be doing:

  1. Add a specific UI that's just for clock_in_project
  2. Add a specific UI that's just for users, and allow them to be searchable
  3. Start moving away from the 2014 base (while attempting to keep a lot of the great parts)
  4. Create my old charts
  5. Cache days that are not the current day for easy access

But my to-do list isn't why I'm writing this blog post. The reason I'm writing this is to explain what a model-view-controller is and what WordPress offers.

The best way I can explain the model view controller is this:

  • Model is an object in your database
  • View is how you’ll be viewing the Model
  • Controller is what directs which Model to choose and which View to choose for it
    -This can be based off: user request, admin request, app request, and default

With WordPress, our MVC is simple: we have post types, we have a theme, and we have a URL.
Whenever a user enters a URL, what they're actually doing is making a request.
Now, from my experience there are two main forms of requests: archives and singles. Archives will show you a list of items based off a query. Singles may also be based off a query; the difference is that you get the full information for one item instead of a bunch of items. In WordPress terms, these map to templates like archive-{post_type}.php and single-{post_type}.php.

Why does this matter?

Well, for the clock-in post type, we're going to have to do some very special things. And unfortunately, by default it renders as if it's just a regular post. We're going to change that.

It's something very simple; we could base it off of the file structure, but I'd prefer not to:

switch ( get_post_type( get_the_ID() ) ) {
  case "type_a":
    // render the view for type_a
    break;
  case "type_b":
    // render the view for type_b
    break;
  default:
    echo "ah, the good old default";
}

As simply as this, we can go through and pick our view. We can also mix and match and do a bunch of funky things.

One thing I want to point out is a sad, sad truth about MVC: XSLT. XSLT was designed to make HTML the view component of choice. We would take in our model, see what we had, and apply the appropriate templates. Important to note: it was all (well, kinda) XML. It gave people a conceptual graduation platform to move up to. So why didn't it work out?

XSLT has a major flaw: most databases don't return XML. If your database isn't returning XML, you're going to have to transform the data into XML, and if you're transforming, you're wasting time, especially when you can just use the raw data retrieved to fill in the dots. Another major flaw is that, despite how awesome XPath is, there isn't a direct correlation from URL concepts to XPath concepts.

What people need to understand is that the URL is a command, just as XPath is a command. Most people go from HTML, CSS, and the file system into PHP and JavaScript, where they either sink or swim. XSL and XPath had the opportunity to make that bridge easier, but they died as other CMSs popped up and took over. Will we see them again? One day, maybe; I certainly try to use them when I can. But I highly, highly doubt it.

Git: Living on the Edge isn’t for the Dumb

As you may know, I've been using git as my primary FTP replacement, version control, and way to put my projects online for public access and viewing. However, there is a very important thing to consider when using this great technology: .gitignore.

Now, when using OAuth, you need a client identity and a client secret. I don't want to hard-code them, because that would make them publicly accessible, so I stored them in a JSON file. I thought I was ignoring the JSON file by adding

^(.*)/secret.(.*)$

to the .git/info/exclude file. However, I did not do it right: git's exclude files use glob patterns, not regular expressions, so a regex like ^(.*)/secret.(.*)$ doesn't match anything. A glob such as secret.* would have been the correct form.

At first I uploaded it to GitHub, only to find that the file was still there. This threw me into a flurry, as for the last three commits I had assumed everything was peachy clean. (I am still learning everything, so I don't hate myself for it.) Luckily, I was able to find this tutorial on GitHub.

Not only was I able to remove my secret from the commits, but I was also able to add it to the .gitignore in a simple manner. My fears relaxed, and a feeling of relief ensued.

Just to add to the security, I also made a proper .htaccess rule to hide the file:

RewriteEngine On
RewriteRule ^(.*)secret\.(.*)$ /404 [L]