ORM’s Dreaming Big: Pt 2 (The Instance)

Previously we went over the Schema, which covers validation, indexes, population and schematypes. Here we’ll go into what people will most likely be using: the instance. So, what is the instance?

An Instance is something that:

  • Holds the values you wish to store or have retrieved
  • Can be created, requested, updated and deleted
  • Has properties which can match conditions

Basically, an instance is the actual values you want to store or retrieve. It is probably the most important part of the database, as without it, well, you have nothing.

Generic yet Important Things

Callbacks

Callbacks can be implemented in one of two ways: callback(err, obj) or Promises.

Constructor(ObjectId, function(err,instance){
  if(err) throw err;
  console.log("this is our instance", instance);
});

Constructor(ObjectId).then(function(instance){
  console.log("this is our instance", instance); 
}).catch(function(err){
  throw err;
});

This is meant to support whichever style you prefer.

Handles and Values

Instances are technically either an ObjectID handle or the actual values. They both have the same interface; however, with an ObjectID handle you do not have to load all of the values, nor do you have all of the values, while with a full instance you do. This is to support as much or as little IO as you desire without having to change the interface.

Creating

Creating an instance should be as simple as creating an object in JavaScript.

Standard – Construct and Save
var instance = new Constructor({
  property:"value"
});
instance.property2 = "value2";

instance.save(function(err){
  if(err) throw new Error("creating caused an error");
  console.log("finished creating");
});

Now, we haven’t gotten into “Constructors” or “Models” yet; however, hopefully this sort of syntax is familiar to you. It’s simple. We want to create a new instance, so we construct it. Because values may still be added or removed, the object is not saved right after construction. Additionally, it’s important that this is done asynchronously. We don’t know when it will be saved or how it will be saved, only that it will be saved.

Calling the Constructor – Less IO

When all of the values are already in a JSON object, constructing the object first is a waste of time and resources.

Constructor({
  property:"value",
  property2:"value2"
}, function(err,objectid){
  if(err) throw new Error("error when creating");
  console.log("finished creating");
});

You may notice that the standard’s callback has no ObjectID but this one does. This is because once you’ve successfully saved a constructed instance, an ObjectID is already set for you and you already have an interface for interacting with the instance, so there is no point in returning anything. Calling the constructor directly, on the other hand, gives you an ObjectID handle to provide an interface to interact with. However, the handle will not have any properties in it, so I would suggest you use instances if you want them.

Static Method – Obvious

In addition, you may also call the static create method. This will return the instance.

Constructor.create({
  property:"value",
  property2:"value2"
}, function(err,instance){
  if(err) throw new Error("error when creating");
  console.log("finished creating");
});

Retrieving

Generally, retrieving will work through the Constructor’s static methods. However, we are going for sugar here.

Standard – ObjectID Populating

If we have an ObjectID handle, we can populate it into an actual instance. It’s important to note there is a difference between an ObjectID value and an ObjectID handle. The value is the bytes/buffer that actually gets indexed; the handle has all the methods of a normal instance. Generally, all ObjectID values will be transformed into instances when retrieving an instance. In addition, anywhere you can use an ObjectID value, you can use an ObjectID handle.

objectidHandle.populate(function(err,instance){
  if(err) throw new Error("error in populating");
  console.log("populated the instance");
});
By ObjectID Value – Opposite
//Retrieving
Constructor(objectidValue,function(err,instance){
  if(err) throw new Error("error in retrieving");
  console.log("retrieved the instance");
});

The above is simple: we use our constructor with the ObjectID and it returns an instance. This is the exact opposite of the initial case, where we sent in some values to create an instance and received an ObjectID handle; here we retrieve the instance based on the handle or value.

Static Method – Obvious
//Retrieving
Constructor.get(objectidValue,function(err,instance){
  if(err) throw new Error("error in retrieving");
  console.log("retrieved the instance");
});

Updating

Standard – save

With your constructed/retrieved object it is exactly the same as before: simply save.

//Updating
instance.property = "new value";
instance.save(function(err){
  if(err) throw new Error("updating caused an error");
  console.log("finished");
});
Static Method – Obvious

You may also update just by calling the update method of your constructor.

Constructor.update(ObjectId, 
  {property:"new value"},
  function(err, instance){
    if(err) throw new Error("error when using update");
    console.log("ran update");
});

Deleting

Deleting is the last part of our CRUD interface. As you might imagine, it’s more of the same.

Standard – destroy
//Deleting
instance.destroy(function(err){
  if(err) throw new Error("destroying caused an error");
  console.log("finished");
});
Static Method – Obvious

You may also delete just by calling the destroy method of your constructor.

Constructor.destroy(ObjectId, function(err, instance){
    if(err) throw new Error("error when using destroy");
    console.log("ran destroy");
});

Property Setting and Getting

Digestors and Verbose

All properties on an instance are actually getter and setter functions.

Object.defineProperty(instance, "propertyname", {
  get: function(){
    return Schema.propertyname.verbose(
      instance._rawValues.propertyname
    );
  },
  set: function(v){
    var ds = Schema.propertyname.digestors;
    var l = ds.length;
    var vv = void(0);
    for(var i=0;i<l;i++){
      vv = ds[i](v);
      if(typeof vv != "undefined") break;
    }
    if(i===l){
      throw new Error("cannot digest " + v);
    }
    instance._rawValues.propertyname = vv;
  }
});

For the getter, we return the verbose value. For the setter, we digest the value down to its raw type.

Marking Dirty Properties and resetting

The first thing that can be done is to mark dirty properties. This ties in with the way digestors and setters work: when setting, we also record which properties are dirty. This is done so that only the dirty values are actually updated.

set: function(v){
  var vv = Schema.property.digest(v);
  if( vv == instance._initvalues.property ){
    delete instance._dirty.property;
  }else{
    instance._dirty.property = vv;
  }
  instance._values.property = vv;
}

Instance.prototype.save = function(cb){
  return Instance.update(this.id,this._dirty,cb);
}

Instance.prototype.reset = function(){
  for(var i in this._dirty){
    this._values[i] = this._initvalues[i];
    delete this._dirty[i];
  }
}
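
As a rough usage sketch (the title property here is hypothetical, and this assumes the setter, save and reset above):

instance.title = "a new title";      //digested and recorded in instance._dirty
instance.save(function(err){
  if(err) throw err;
  //only the dirty values, {title: "a new title"}, were sent to Instance.update
});

instance.title = "a value we change our mind about";
instance.reset();                    //title is back to its initial value and no longer dirty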


Resync with the sender

At times you may want to ensure your instance is exactly the same as the one on the database, or wherever it was sent from. All that needs to be done is a resync.

Instance.prototype.resync = function(cb){
  var _this = this;
  Instance(this.id,function(err,values){
    if(err) return cb(err);
    for(var i in values){
      _this[i] = values[i];
    }
    cb(void(0), _this);
  });
}
Listening for Updates

You may also use an event emitter that sends an “update” event with a property name and value.

Instance.prototype.syncTo = function(ee){
  var _this = this
  ee.on("update",function(prop,val){
    _this[prop] = val;
  })
};
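
A minimal usage sketch, assuming a standard Node EventEmitter as the sender:

var EventEmitter = require("events").EventEmitter;
var sender = new EventEmitter();

instance.syncTo(sender);

//any "update" event from the sender is now written straight onto the instance
sender.emit("update", "property", "a value pushed from elsewhere");
console.log(instance.property); // "a value pushed from elsewhere"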

Dom and QueryString Interactions

And of course, you will need some DOM interactions. Ideally, I would use an available library such as qs along with serializeObject/deserialize to ensure I don’t mess anything up. From there I would set either the values in the object or the values in the query string or form. In addition, it’s also possible to bind the instance to the form by using syncTo.
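
As a loose sketch of what that could look like, assuming jQuery with a serializeObject plugin and the qs module (these are my choices for the example, not part of the ORM):

var qs = require("qs");

//form -> instance
var formValues = $("#myform").serializeObject(); //plugin, not core jQuery
for(var key in formValues){
  instance[key] = formValues[key]; //runs the usual digestors and dirty marking
}

//instance -> query string
var query = qs.stringify({ property: instance.property, property2: instance.property2 });

//or keep the instance bound to the form via syncTo
instance.syncTo(formEmitter); //formEmitter is a hypothetical emitter wired to input changes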

That is the instance

Perhaps there is too much sugar here. Perhaps not enough in the right areas. I’ve considered streaming as an alternative plan; however, in the end, I believe a simple API is a good API. Perhaps I should prioritize a bit. What is definitely in: all of the static methods, ObjectID.populate, instance saving, instance destroying and using the Constructor, well, as a constructor. In addition, the DOM/querystring aspects are pretty important, since without them we’re back at square one: a decent ORM that refuses to believe the DOM, or URLs that don’t use JSON, exist. Everything else is a bit up in the air.

ORM’s Dreaming Big: Pt 1 (Schema)

So I’ve come across Waterline. Waterline is unique in that it isn’t just an interface to SQL or MongoDB but to anything. *Anything*. (*Anything that someone has made an adapter for.) Unfortunately, my experiences with it have led me to believe it is not a good fit for a full stack outside of Sails. I’ve written more about my issues with the framework here. Now, I’m asking for a lot there, so I don’t expect it to go anywhere. But it made me have a three day coding sprint of trying to develop my own ORM the way I wanted it. As I continued, I realized it is a ton of work alone and not all of the things I desire are available to mooch off of. But the idea… The dream… That can live on…

What would it be used for?

This is one of the reasons why I wrote this post. Waterline is awesome. However, it left me wanting more and questioning whether supporting too much is giving me less. As a result, the interfaces I believe are of utmost importance to provide compatibility with are:

  • Memory – This was a proof of concept by them; however, it can be implemented in a fast manner. Libraries like Lazy.js will compile all the arguments so it’s only run in a single loop. Additionally, supporting proper indexes can add even more speed. The point of being able to use memory is so that you can create “collections” easily and beautifully, and query them as you would anything else.
  • LocalStorage – This is another clientside feature that can be implemented. LocalStorage is an interesting beast but nonetheless quite manageable. What you would do is start by storing everything under connection/model, where connection and model are the connection and model names. From there you would store indexes under connection/model/indexName and probably each object in its own place, such as connection/model/ObjectID (see the sketch after this list). This allows you to not load too much at once and to asynchronously retrieve objects as you need them, instead of loading everything into memory and hoping all goes well.
  • HTTP – By providing a wrapper to create HTTP calls, you can interface with your database easily as if it were again on the server. Of course a serverside implementation is also necessary; however, I think that’s relatively simple in the long run. Perhaps that begs the question of creating “Users” that can interface with your ORM.
  • FileSystem – MongoDB is the standard, without a doubt. However, I’m a strong believer in diversity (when it’s convenient to say I am). As a result, creating a filesystem document based framework doesn’t seem too far off or out of line. It would most likely be quite similar to LocalStorage, actually.
  • MongoDB – Mongo in many ways provided the breakthrough db. Might as well still be able to interface with it.
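
A rough sketch of that LocalStorage layout (the connection, model and key names are purely illustrative):

//one entry for the model itself
localStorage.setItem("myconnection/user", JSON.stringify({ count: 2 }));

//one entry per index
localStorage.setItem("myconnection/user/email", JSON.stringify({
  "a@example.com": "objectid-a", //indexed value -> ObjectID
  "b@example.com": "objectid-b"
}));

//one entry per object, so instances can be retrieved lazily
localStorage.setItem("myconnection/user/objectid-a", JSON.stringify({
  email: "a@example.com",
  name: "A"
}));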

Sugary Snacks

The Validator

The validator is without a doubt one of the most important features of any database. The purposes of a validator are not just for the database’s sake but also for the clientside. When creating a form, being able to just hook in and apply a validator is without a doubt one of the sweetest things possible. Unfortunately, those validation parameters generally are not included with the database. This is partly a good thing, since you don’t want users to see all of your internals; however, for some code, rewriting your models is kind of a pain in the ass. Just tedious work. As a result, here is the first commandment:

  • A Validator can also be easily hooked into any form
  • A Validator can also easily be used to generate forms

The second part is obviously much more complicated, as you can see here and here. But we’re dreaming big here, right? No holds barred. Whatever we damn well please. And I would be pleased to not have to rewrite code for every single form I ever come up with.

As for creating it, here is a laundry list of features…

“Native” Types

The Native Types I would prefer to keep as simple as possible

  • Number
  • Buffer
  • String
  • JSON
  • Any
  • Typed ObjectID – Can specify a specific Model(s) allowed.
  • Typed Array – Can Specify the Type of Array it will be (Any is also allowed)

The reason long, date and others are not supported is because those will end up being compiled to these native types anyway. The ObjectID is the only thing that’s really different. Numbers, HashMaps and ObjectIDs are the only things that cannot be evaluated to an array.

To use these:

prop1: Number,     //You can provide the object class
prop2: "buffer",   //You can provide a string specifying the type
prop3: ["string"], //You can create a typed array
prop4: {
  native: Object   //You can specify the type explicitly
},
prop5: Framework.Types.ObjectID, //This specifies any other document
prop6: "objectid:modelname",     //This specifies that you expect it to use another model
prop7: AModelClass,              //This specifies that you expect to use that other model; same as above
prop8: {
  native: "objectid",
  model: "modelname"             //Specify the type and model explicitly
},
prop9: null,                     //Specifies anything
prop10: Framework.Types.Anything //Specifies anything as well
Additional Types

You may also use custom SchemaTypes. However, SchemaTypes will not have all the features that a validator expects. In addition, anything that you would write in a custom SchemaType may also be provided directly within the Schema.

To use a custom SchemaType:

prop1: {
  native: CustomSchemaTypeClass
},
prop2: {
  native: "customschematypeclass"
}

If you provide a string, your validator will create a dependency on that SchemaType. This means that if that SchemaType is not available in your framework, your model cannot be used until it is. Some of the features below cannot be used with custom SchemaTypes and are only available to the Schema. Additionally, you can provide custom options that will override the SchemaType’s original options:

prop1: {
  native: "customschematypeclass",
  a_custom_option: "a value"
},
prop2: {
  native: CustomSchemaTypeClass({
     a_custom_option: "a value"
  })
}
Basic Validators
  • Required – Cannot be null or undefined
  • Final – After first created, cannot be set again
  • Unique – this will also create an index.

These are simple and straightforward.
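
A small sketch of how these could look in a schema (the property names are just examples):

username: {
  native: String,
  required: true, //cannot be null or undefined
  unique: true    //also creates an index on username
},
createdBy: {
  native: "objectid:user",
  final: true     //may be set on creation, never again
}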

Default

At times you may want to provide a default, and now you can, in three different ways:

property:{
  native: [String],
  default: ["value"]
},
property2:{
  native: [Number],
  default: function(){
    return Math.random();
  }
},
property3:{
  native: [ObjectID],
  default: function(next){
    Query().find({something: value}).exec(next);
  }
}

Validators Available in Custom SchemaTypes

The following are available for use within your Schema as well as within SchemaTypes. SchemaTypes are interesting in that they can be extended indefinitely, yet extending only ever returns another SchemaType. This is done because of the following:

var _ = require("lodash"); //assuming lodash here for merging options

function CustomSchemaType(options){
  if(!(this instanceof CustomSchemaType)) return new CustomSchemaType(options);
  this.options = options || {};
}

//"extending" just merges options into a fresh copy and hands back a new SchemaType
CustomSchemaType.prototype.extend = function(options){
  options = _.merge({}, this.options, options);
  return new this.constructor(options);
};

Simply put, you can extend and extend and extend away.
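
For example, with the extend method sketched above (the option names here are made up):

var Url = CustomSchemaType({ match: /^https?:/ });
var SecureUrl = Url.extend({ match: /^https:/ });         //overrides match
var ShortSecureUrl = SecureUrl.extend({ maxlength: 64 });  //adds another option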

Custom Validators

Custom validators come in two flavors: synchronous and asynchronous. This will be a theme as we continue.

syncProperty:{
  native:Number,
  validator: function(value){
    return false;
  }
},
asyncProperty:{
  native:Number,
  validator: function(value,next){
    next(false);
  }
}
Value to Array Validation

The idea here is that you may want to compare a value to an array of values. Something like:

//BAD!
arrayCompareBad:{
  native: Number,
  validator: function(value){
    return [1,2,3,4].indexOf(value) !== -1;
  }
},
//Good.
arrayCompareGood:{
  native: Number,
  in: [1,2,3,4]
}

But that is slow, as you create an array every time. Instead, the idea is that you’d be able to define it beforehand:

  • In – Ensures that the value is in the values
  • Not In – Ensures the value is not in the values

Now, you can provide this value up front or provide it through a synchronous or asynchronous function. It’s important to note that anything can use this syntax. Enums have bothered me for quite some time. Any value can be compared to other values to ensure it is restricted to a certain subset. Enums used to only apply to strings; however, they can be applied to numbers, Buffers and, yes, even Arrays. HashMaps and ObjectIDs are a different beast, however. HashMaps are keys, so there is no point in attempting to give them an enum. ObjectIDs require that certain ObjectIDs already exist. Now, this can be done; however, at that point you would need to specify a query and do it asynchronously.
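
For example, the allowed values could be supplied up front, computed synchronously, or fetched asynchronously (the property names and the Query call are illustrative):

status: {
  native: String,
  in: ["draft", "published", "archived"] //provided up front
},
category: {
  native: Number,
  not_in: function(){
    return retiredCategoryIds; //computed synchronously; retiredCategoryIds is hypothetical
  }
},
author: {
  native: "objectid:user",
  in: function(next){
    //computed asynchronously, e.g. only currently active users
    Query().find({ active: true }).exec(next);
  }
}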

Array to Array Validation
  • Any – True if at least one of the Validator Array Values is present
  • All – False unless all of the Validator Array Values are present
  • More – True if there is more than just the Validator Array Values present
  • Not Any – False if at least one of the Validator Array Values is present
  • Not All – True unless all of the Validator Array Values are present
  • Not More – False if there is more than only the Validator Array Values present. Will also return true if empty.

You can use these like so

property:{
  native: [String],
  any:["any","of","these"],
  not_more:["any", "of", "these", "and", "no","more"]
}
Population and Save Overrides

At times you will want to populate your data from a source other than the database. Additionally, you may want to override how a property is stored after it’s been validated. An example would be “User.notifications”: duplicating the data would be absurd, and if you are storing every ObjectID, you may run into numbers in the thousands. However, you can have that particular part populated on the fly.

image: {
  native: String,
  native_populate: Buffer,
  populate: function(storedvalue, next){
    fs.readFile(storedvalue, next);
  }
}

It should be noted that these aspects should probably also be available as a stream. Something like this would also work

var ReadableStream = require("stream").Readable;
var util = require("util");

function MyReadableStreamClass(storedvalue){
  ReadableStream.call(this);
  this.storedvalue = storedvalue;
}
util.inherits(MyReadableStreamClass, ReadableStream);

image: {
  native: String,
  native_populate: Buffer,
  populate: MyReadableStreamClass
}

This will create the readable stream on the fly. It should be noted that different sources will populate in different ways; as a result, while this one may send raw data, another might send JSON.

Now, you may be populating data, but what happens when someone wants to save something?

image: {
  native: Buffer,
  depopulate: function(args, next){
    //assumes args is {name, buffer} and that path/fs have been required
    var ext = path.extname(args.name); //e.g. ".png"
    var name = this.instance.id + "/image" + ext;
    fs.writeFile(name, args.buffer, function(e){
      if(e) return next(e);
      next(void(0), name); //name will be stored with the doc
    });
  }
}

//or

image: {
  native: Buffer,
  depopulate: MyWritableStreamClass
}
Digestors (Constructor Overloading) and Verbosity

Your developers may want to be using Moments as dates, yet still expect to be able to send a normal JavaScript Date as something to be stored. This is where digestors and verbosity come in.

date: {
  native: Number,
  digestor: [function(date){
    if(date instanceof Date) 
      return date.getTime();
  }, function(time){
    if(typeof time == "number") 
      return time;
  }, function(value){
    if(moment.isMoment(value))
      return value.valueOf();
  }]
}

Here we can see that any of the above values will be considered a valid number. As a result, you don’t have to worry about what you set the date to be. If you want to always get the date back as a moment:

date:{
  native:Number,
  verbose:function(value){
    return moment(value);
  }
}

This will allow you to easily do whatever you want with the number without changing your database.
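
Put together, a rough usage sketch of the date property above:

instance.date = new Date();     //digested to a number via getTime()
instance.date = moment();       //digested via valueOf()
instance.date = 1420070400000;  //already a number, stored as-is

console.log(instance.date.fromNow()); //the verbose getter always hands back a moment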

Schema Methods

Virtual Properties

Now, we’ve seen actual properties, but there are some properties that will not be stored and are instead derived from the instance itself:

var schema = new Schema(validations);
schema.virtual("virtual_property", function(){
  // getter
  return this.stringA +"-"+ this.stringB;
},function(value){
  value = value.split("-");
  this.stringA = value[0];
  this.stringB = value[1];
});
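
Assuming stringA and stringB are plain string properties, the virtual behaves roughly like this:

instance.stringA = "hello";
instance.stringB = "world";
console.log(instance.virtual_property); // "hello-world"

instance.virtual_property = "foo-bar";
console.log(instance.stringA); // "foo"
console.log(instance.stringB); // "bar"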

Schema Indexes

The last, and probably the most important, feature is indexes. Now, indexes are not and should never be available to a custom SchemaType. Additionally, because indexes can be so flexible, they bring up some interesting decisions. It’s important to note that not only can you index normal properties, you can also index virtual properties. The indexes available are:

var schema = new Schema(validations);

schema.index("propertyname");
schema.index("uniqueproperty", "unique");
schema.index("functionalproperty", function(a,b){
  return b - a;
});
schema.index("callbackproperty", function(a,b,next){
  async.map([a,b],fs.stat,function(err,res){
    if(err) return next(err);
    next(void(0), res[0].size - res[1].size);
  });
});
Validate

Using validate is simple.

  • If the object is JSON – it will validate the JSON
  • If the object is a DOM element – it will validate the DOM element if it’s a form. If it is not, it will throw an error.

This will be available from the model via Model.validate, which is basically Model.constructor.validate.bind(Model.constructor).
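
A sketch of what that might look like, assuming the same error-first callback style used everywhere else here:

//plain JSON
Model.validate({ property: "value" }, function(err){
  if(err) return console.log("invalid json", err);
  console.log("valid");
});

//a DOM element; anything that is not a form throws
Model.validate(document.querySelector("#myform"), function(err){
  if(err) return console.log("invalid form", err);
  console.log("valid");
});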

Finishing words

It’s important to note that the schema is simply a validator and provides database indexes. You cannot make queries with it, and it essentially does nothing except provide important information for storing the data. Ideally, you will want to do as much as you can with SchemaTypes so that you can reuse more code, while the indexes, virtual properties, what’s required and so on remain Schema-to-Schema dependent. If you don’t like me dreaming big, well… To be honest… I believe dreaming big is part of the reason why I am here today. Because I see what I want to make and I go out and try to do it. And if I cannot, I flesh out the idea so much that I can look back on it and say “If only”.

Automated Mongoose: A Red Herring?

So for many months I have been attempting to automate the views and methods of Mongoose Schemas. However, the longer I attempt it, the more I realize how loose Mongoose can be. A small example is the SchemaTypes (which have little to no documentation). At this location we see an issue another person and I have both run into. The maintainer isn’t very interested in fixing this, despite it existing in every other schema type (his own reasons are his, not for me to accept or deny), so I have gone down a separate route:

//a small helper to get a usable type name out of a Mongoose schema path
function getSchemaTypeName(path){
  if(path.hasOwnProperty("caster")){
    return "Array";
  }else if(typeof path.instance != "undefined"){
    return path.instance;
  }else{
    return path.options.type.name;
  }
}
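
Used together with Mongoose’s Schema#eachPath, that looks something like this (getSchemaTypeName is just my name for the snippet above, and userSchema is any schema you already have):

userSchema.eachPath(function(name, path){
  console.log(name, "->", getSchemaTypeName(path));
});
//e.g. name -> String, friends -> Array, age -> Number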

It’s not a huge issue, but it results in a little frustration. Nonetheless, I’m finding there are many issues with the whole scenario. And I don’t mean in terms of SchemaTypes, I mean other issues. Among them being…

  • Extended SchemaTypes: URLs are essentially strings, but they must go through a different validation process. Should I extend the String SchemaType in order to ensure it has the same possibilities?
  • Faceted Searching: This is very important. When it comes down to finding exactly, or around, what you need, it’s nice to have a way to trim things down. However, each SchemaType has its own MongoDB Comparison Operators or Evaluation Operators, and I cannot be sure which can be applied to which (unless I check for ancestry).
  • Different properties are viewed differently: though the input may be the same, ensuring a title is seen as a title and a URL is used as an href isn’t a given. This may seem obvious; however, I have been attempting for many months to ensure that there is no difference between routing and viewing, mostly because I enjoy being a lazy DRY programmer.
  • Different models use different organization patterns. This is most apparent with map and photo oriented data, where nobody really cares about the text unless they click on something.
  • Almost all models will use some sort of auxiliary index or model. For example: you can have a user model. It has its name, email, password and role. Basic stuff. But then we want to add events. What have they created? What have they liked? Etc. In addition, we want to add an index on the number of views of a particular picture, but also compare those views to videos. This is where we start creating other things that are not attached to the original model, yet whose information gets appended for the views’ purposes.
  • Terms and conditions, tour-guiding and multi page methods. This is also pretty important, as even though a person may have successfully authenticated, that does not mean they are good to go. However, how are we to know the next step in a multi page method?
  • User roles. What is the best manner to document what a user’s role is, the role hierarchy, and ensuring we know who can do what in the routing and the viewing?
  • Pretty URLs: Nobody wants to see /Items/3746982119433234. It’s ugly and unfamiliar. People would rather see /item/best_red_rose_bouquet.
  • Model index and root: What do we do here? Give them a preview? Tell them to do X, Y and Z?
  • Aggregating content: How do we show the content aggregated? With a Schema?

There are many issues at hand, and it leads me to further understand how nice content management systems are. Not because they are bloated. Not because they are broken. Not because they don’t offer all the features the language of your choice has to offer. Not because they have ridiculous patterns for event emitting and caching. Or how they don’t like anything more than the good ‘ol POST. But because they solved those problems for you. They solve them by forcing you to do it the hard way. They solve it by giving you an excuse to complain and want better. They solve it by other people getting motivated and solving the problem through “plugins” or “modules”.

I want to automate Mongoose so badly. I can feel it at the tips of my fingers. Barely inches away. And yet I understand that even after I’m done with the beginnings, there is so much more you need to make sure people can do, or have access to through the libraries you use to do it.