6.0.4 - ??? ??? ?? ????

- Update runTests.py from 2.1.0 to 3.0.5. See
https://github.com/kata198/GoodTests for more details.

6.0.3 - Tue May 23 2017

- Try to make a deepcopy, if possible, when setting/fetching values into _origData
(which is used to determine if a field has changed and needs to be updated on a
save). An example would be a stored json value with a list or dict nested inside.
If that nested object was modified in place, it previously would not be seen as
an update. Now it will.
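
  A hedged sketch of the behaviour this fixes (the model and field names here
  are hypothetical, and IRJsonValue is assumed importable from IndexedRedis.fields):

    from IndexedRedis import IndexedRedisModel
    from IndexedRedis.fields import IRField, IRJsonValue

    class Article(IndexedRedisModel):
        FIELDS = [ IRField('data', valueType=IRJsonValue) ]
        INDEXED_FIELDS = []
        KEY_NAME = 'Article'

    obj = Article(data={'tags': []})
    obj.save()

    # Mutating the nested list in place used to leave _origData referencing the
    # same object, so the change was invisible. With the deepcopy, the next
    # save sees 'data' as changed and writes the update.
    obj.data['tags'].append('redis')
    obj.save()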

6.0.2 - Wed May 17 2017

- Update a bunch of docstrings
- Fix "valueType" missing on IRForeign*LinkField .copy and .getReprProperties
methods


6.0.1 - Wed May 17 2017

- Fix IRForeignLinkField not showing up in __all__ in fields.foreign, which
caused that field to not show up in pydoc


6.0.0 - Tue May 16 2017

Foreign Link Support! (Link/Fetch/Save objects related to other objects, like
foreign keys)

Also, if upgrading from <5.0.0, see CONVERTING_TO_5.0.0
and ChangeLog for possibly backwards-incompatible changes made in 5.0.0.

This release has no backwards incompatible changes.


- Add an IRForeignLinkField which allows linking another object by primary
key. You can assign it an instance of another model or a primary key; upon
access, if not already fetched, the foreign object will be fetched and
returned.

- Add IRForeignMultiLinkField which links one object to a list of other
objects.
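
  A minimal sketch of both field types (the model names here are hypothetical,
  and the assumed constructor form is FieldType(fieldName, ForeignModelClass)):

    from IndexedRedis import IndexedRedisModel
    from IndexedRedis.fields import IRField
    from IndexedRedis.fields.foreign import IRForeignLinkField, IRForeignMultiLinkField

    class Person(IndexedRedisModel):
        FIELDS = [ IRField('name') ]
        INDEXED_FIELDS = [ 'name' ]
        KEY_NAME = 'Person'

    class Band(IndexedRedisModel):
        FIELDS = [
            IRField('name'),
            IRForeignLinkField('frontman', Person),      # links a single Person
            IRForeignMultiLinkField('members', Person),  # links a list of Persons
        ]
        INDEXED_FIELDS = [ 'name' ]
        KEY_NAME = 'Band'

    band = Band(name='The Examples')
    band.frontman = Person(name='Alice')                   # assign an instance (or a pk)
    band.members = [ Person(name='Alice'), Person(name='Bob') ]
    band.save()     # cascadeSave (default True) also saves the linked Persons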

^ Now that foreign fields are supported, you can do a 1:1 conversion of your SQL
models into IndexedRedis and use them the same way. You CAN do better by
designing your models for IR, but you WILL get at least about 5 to 15 times the
performance just by switching the engine.

- Add "cascadeSave" option to saving functions. This flag (default True) will
cause all foreign link objects to be saved (inserted or updated), recursively.

- Add "cascadeFetch" option to fetching functions. This flag (default False)
will cause all foreign link objects to be fetched right away, recursively. By
default, they are fetched on first-access.
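
  A hedged sketch of the cascade flags, reusing the hypothetical Person/Band
  models from the foreign-link sketch above (the exact fetch call accepting
  cascadeFetch is assumed here):

    band = Band(name='The Examples')
    band.frontman = Person(name='Alice')

    band.save()                      # cascadeSave=True (default): Alice is saved too
    band.save(cascadeSave=False)     # only the Band itself is written

    # Linked objects are normally fetched lazily on first access;
    # cascadeFetch=True fetches the whole object graph up front.
    bands = Band.objects.filter(name='The Examples').all(cascadeFetch=True)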

- Add "cascadeObjects" to comparison functions. This controls whether the
values on the foreign objects are considered, or if just the pk relationship
is considered.

- Add "cascadeObjects" to reload function. This controls whether if any
foreign objects have changed values, we reload them, or only if the pk was
changed.

^ See "Foreign Links" section in README for details now.

- Some performance improvements

- Add "diff" method to IndexedRedisModel, to compare the values on two
objects, and return a dict of "changedField" : ( valueOnFirstObj,
valueOnSecondObj )
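
  A hedged sketch (assuming diff is called on one instance with the other as
  its argument, using the hypothetical Person model from above):

    a = Person(name='Alice')
    b = Person(name='Bob')

    a.diff(b)
    # -> { 'name' : ( 'Alice', 'Bob' ) }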


- Introduce "optional patches", as root-dir "patches" folder. These are
optional patches which may match your situation, but won't be included in
mainline for some reason.

There's a README in that dir which will list a brief explanation of
each optional patch, with a more descript in the comment field of the unified
diff in the same directory.


5.0.2 - Sat May 13 2017
- Fix where PyDoc could fail to create docs for projects importing
IndexedRedis in some circumstances
- Ensure that getPrimaryKeys function always returns a list and not sometimes
a set (because of upstream Redis package)


5.0.1 - Mon May 1 2017

A conversion document has been created, CONVERTING_TO_5.0.0 in the main
directory of this package. Also available online, at 
  https://github.com/kata198/indexedredis/blob/5.0branch/CONVERTING_TO_5.0.0

Use this document to prepare your code for the changes in IndexedRedis 5.0.0.
Also please review this Changelog to see new features, bug fixes, etc which
will not be listed in the conversion document.

- Improve IRPickleField to work with ALL types (including now, strings and
bytes)

  THIS WAS PREVIOUSLY BACKWARDS-INCOMPATIBLE, BUT HAS BEEN MADE COMPATIBLE
  WITHOUT CHANGES TO YOUR CODE. **DO NOT RUN THE COMPAT FUNCTION SHIPPED WITH
  4.1.3 - 4.1.4**


- Fixup the way connecting to Redis works. Now, instead of having to define
REDIS_CONNECTION_PARAMS on each model, you can call
"setDefaultRedisConnectionParams" to set a default connection configuration.
This configuration will be used on all models unless REDIS_CONNECTION_PARAMS has been
defined and is not empty.

- Rename "connect" method to "connectAlt" on IndexedRedisModel (to better
convey what it does). Also fix it up to better support multiple alternate
connection models
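
  A hedged sketch (MyModel is hypothetical; connectAlt is assumed to take a
  dict of connection params and return a model class bound to them, as the old
  "connect" method did):

    AltModel = MyModel.connectAlt( { 'host' : '10.0.0.5', 'port' : 6379, 'db' : 2 } )
    objs = AltModel.objects.all()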

- Remove formerly deprecated IndexedRedisModel.BASE64_FIELDS - You've been
warned for a while now to use fields.IRField.IRBase64Field
  **BACKWARDS INCOMPATIBLE**
  To convert, remove the BASE64_FIELDS array from your model. For any field
  names that were in it, change the entry in the FIELDS array to be an
  IRBase64Field, as found in:
    from IndexedRedis.fields import IRBase64Field

  If you already have a field type, and were using BASE64_FIELDS to base64
  encode that type (like maybe you are compressing then base64 encoding), use
  an IRFieldChain with IRBase64Field as the last element to retain the same
  functionality.

  Like:
     IRFieldChain('fieldName', [IRUnicodeField(encoding='utf-16'), IRBase64Field()])

  Models with BASE64_FIELDS defined will generate an error upon validation.
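
  A hedged before/after sketch (model and field names here are made up):

    from IndexedRedis import IndexedRedisModel
    from IndexedRedis.fields import IRBase64Field

    # 4.x style -- plain string name plus BASE64_FIELDS:
    class OldModel(IndexedRedisModel):
        FIELDS = [ 'blobData' ]
        BASE64_FIELDS = [ 'blobData' ]

    # 5.0.0 style -- the field type itself carries the encoding:
    class NewModel(IndexedRedisModel):
        FIELDS = [ IRBase64Field('blobData') ]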

- Remove IndexedRedis.BINARY_FIELDS - Use the new IRBytesField to act the same
way as a BINARY_FIELD did on python3 or python2, use IRField(valueType=bytes)
[python 3 only], or use IRRawField for no encoding/decoding at all. You can
combine these in a chain just like IRBase64Field.
  **BACKWARDS INCOMPATIBLE**

  Models with BINARY_FIELDS defined will generate an error upon validation.
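
  A hedged before/after sketch (model and field names here are made up):

    from IndexedRedis import IndexedRedisModel
    from IndexedRedis.fields import IRBytesField

    # 4.x style:
    class OldModel(IndexedRedisModel):
        FIELDS = [ 'rawData' ]
        BINARY_FIELDS = [ 'rawData' ]

    # 5.0.0 style:
    class NewModel(IndexedRedisModel):
        FIELDS = [ IRBytesField('rawData') ]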

- Make copy.copy and copy.deepcopy on an IndexedRedisModel call
IndexedRedisModel.copy(False), in order to copy the data but NOT have the
copy linked to the database. A save on the copy will be an insert; a save on
the original will remain an update.
  **BACKWARDS INCOMPATIBLE**

- Removed the old, deprecated, compat global "defaultEncoding", retaining just
defaultIREncoding. Also, removed deprecated getEncoding and setEncoding, use getDefaultIREncoding and setDefaultIREncoding
  **BACKWARDS INCOMPATIBLE**

  You shouldn't notice this. You should be using setDefaultIREncoding or
  getDefaultIREncoding for default encodings

- Introduce a new IRField method called "fromInput", which is like
"convert", but is used when the input is guaranteed to NOT come from storage.
This will simplify a lot of things, remove some conditionals and weird garbage,
etc. Constructing objects is now a lot cheaper.


- Store "was validated" result for model by the class itself. This allows models with the same keyname (like for conversion, tests, etc) to be validated for each unique class.

- Add IRClassicField as a replacement for plain-string names in the FIELDS
array. validateModel will automatically convert these to IRClassicField and
warn you that plain-string names may go away. I have not yet decided whether
5.0.0 will generate an error or just a warning for now.

- IRFields which have not been set now default to irNull (instead of empty
string). Classic string fields (just a string name, not an IRField object)
retain the empty-string default.
This allows a distinction between not-set and empty-string values
(which can be meaningful in several contexts).

You can override this by setting defaultValue='something else' (e.g. empty
string, which was the previous default on some types; irNull was the previous
default on others).
  **BACKWARDS INCOMPATIBLE**

  Consider that if you are testing like
    if myObj.someField == ''
  this will stop working on newly created objects if you aren't setting
  the value and were expecting that test to cover such a condition. You can use:
    if not myObj.someField
  or
    if myObj.someField in ('', b'', irNull)
  to support both.

  There is no "compat convert" method for this, because it's unknown if an
  item was set to default string, or empty value. It's on you to make sure
  your code either handles both, or you convert your datasets accordingly. 

  You can override the defaultValue by passing defaultValue=X to any IRField
  or subclass, so to retain old behaviour set defaultValue='' on your fields.

- Allow specifying a defaultValue on IRFields. This specifies what the default
(not-set) value will be for each field. Every IRField type (except
IRClassicField) supports specifying a default value.
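
  For example (field names hypothetical):

    FIELDS = [
        IRField('nickname', valueType=str, defaultValue=''),  # keep the old empty-string default
        IRField('age', valueType=int),                        # unset values now come back as irNull
    ]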

- repr on FIELDS now shows the field type and any properties specific to that
object.

- asDict now takes a new param (strKeys, default False), which will str() the
keys instead of using the IRField objects (so if you are pretty-printing the
asDict repr, you'll want to use this to prevent the field types from creeping
into the printout)

- Use "fromInput" every time a field is set on an IndexedRedisModel
( via myModel.xfield = someVal ), same as if it were done via the constructor.
This ensures that everything always has an expected type, and can fix some
weird issues when folks set unconverted values.
  ** Backwards Incompatible, if you were doing it wrong before... ;) **

- Rename "convert" to "fromStorage" on IRField. Instead of implementing
convert/toStorage/fromInput, the subclasses should implement the
underscore-prefixed version,

  * _fromStorage
  * _fromInput
  * _toStorage

The non-underscore prefixed versions have been reworked to handle irNull and
such.
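
  A hedged sketch of a custom field built this way (the field itself is made
  up, and the (self, value) signatures are assumed):

    from IndexedRedis.fields import IRField

    class IRLowercaseField(IRField):
        '''A hypothetical field that stores and returns values lowercased.'''

        def _toStorage(self, value):
            # value -> what gets written to Redis
            return value.lower()

        def _fromStorage(self, value):
            # what was read from Redis -> value handed back to the caller
            return value

        def _fromInput(self, value):
            # value assigned directly on a model (i.e. not from storage)
            return value.lower()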

- Change the way irNull is handled; pull pretty much all of the work and
flip-flopping out of IndexedRedisModel and make it part of IRField

- Ensure that "toIndex" is ALWAYS called when setting indexes. This fixes some
issues with hashIndex=True (like that irNull would not previously hash).

- Fix some cases where irNull would equal empty string on indexes


- Work around a strange "bug" in python that could cause some builtin types to
be overridden within the "fields.__init__" module

- Make IRBytesField able to index (by hashing the bytes)

- Force IRUnicodeField to always have a hashed index, when used as an index
  **BACKWARDS INCOMPATIBLE**
    If you were using an IRUnicodeField as an index before, you will need to
    reindex your data. This is because without hashing, there are issues with
    many codecs when obtaining the index key. But it's okay, it's simple to
    reindex and move forward.

	For any model that has an index on an IRUnicodeField, you must call:

	MyModel.objects.compat_convertHashedIndexes()

    This existing method will take care of ensuring that ALL fields are using
    the proper index format, be it hashed or unhashed, and even handles the
    case where you accidentally let your application run and have a mix of
    both hashed and unhashed values within the index.

	Please ensure that your application is offline before executing this
	function, for each model that may need index conversion.


- Update runTests.py to the latest provided with the new version of
GoodTests.py

- Some minor to moderate cleanups/fixups/optimizations

- Add pprint method to both IndexedRedisModel and IRQueryableList. This will
pretty-print a dict representation of the model/collection of models, to a
given stream (default sys.stdout)
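
  For example (myObj / MyModel are hypothetical):

    myObj.pprint()                    # pretty-print one model (to sys.stdout by default)
    MyModel.objects.all().pprint()    # pretty-print every model in the IRQueryableList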

- Add __eq__ and __ne__ to IndexedRedisModel, to test if the models are equal
(including ID).

- Add "hasSameValues" method to IndexedRedisModel, to test if two objects have
the same field values, even if the _id primary key is different (i.e.
different objects in the database)
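
  For example (objA / objB are hypothetical fetched objects):

    objA == objB                # True only if the field values AND the _id match
    objA.hasSameValues(objB)    # True if the field values match, even with different _id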

- Fix where on insert the forStorage data would be put in _origData instead of
the converted data, which would cause that field to be listed as needing to be
updated if that same object was saved again

- Fix a strange bug where some builtin types could be overridden in the
IndexedRedis.fields.__init__.py file

- Allow specifying an explicit encoding on IRBytesField, to use an alternative
to the default encoding

- Make IRFields copyable

- Add "copyModel" method to IndexedRedisModel as a class method which will
return a copy of the model class. This way when converting field types etc you
can easily have the same model, copy it, and change one.

- Add "name" property to IRField to make it more obvious how to access ( you
can still use str(fieldObj) )

- Improve compat_convertHashedIndexes method to work with all field types, and
to only touch fields which can both hash and not hash

- Update IRFixedPointField to ensure the value is always rounded to the number
of decimal places given by the decimalPlaces property, not just after fetching
from storage.

- Ensure IRRawField always actually does no encoding/decoding, and does not
support irNull (it just treats it the same as an empty string)

- Ensure that IRBytesField always encodes to bytes as soon as you set the
value on the object

- Ensure that every field type converts the value to its consumable form (so
same before-save and after-fetch) when setting the attribute through the
dot-operator (myobj.value = X ) or constructor.

- Fix bug where toIndex could be calling toStorage twice on a value (which
affected things like base64 fields, which could be base64-of-base64 on index).
This didn't matter previously because base64 fields couldn't index, but maybe
some case hit this bug.

- Allow IRFieldChain to index if the right-most field in the chain supports
indexing

- Added many more tests, like for each field type, lots more usage scenarios,
etc.

- Make IRCompressedField indexable.

The only non-indexable fields now are:
IRPickleField (different py2 vs py3 repr), IRRawField (because it does no
encoding/decoding), and IRField with valueType of float, dict, or json (floats
because they are machine-dependent; use IRFixedPointField to define a precision
and support indexes. Dicts are not ordered. And json has too many nesting
issues.)

- Support "lzma" ( aka "xz" ) compression in IRCompressedField. This is
provided by the core in python3. For python2, IndexedRedis also supports
backports.lzma and lzmaffi as alternative implementations. You can also
explicitly set IndexedRedis.fields.compressed._lzmaMod if you have some other
external implementation for python2
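
  A hedged sketch (the field name is made up, and the keyword used to select
  the algorithm is an assumption; check the IRCompressedField docstring in your
  installed version):

    from IndexedRedis.fields import IRCompressedField

    FIELDS = [ IRCompressedField('bigBlob', compressMode='lzma') ]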

- Some cleanups, refactoring, optimizations, simplifications

- Rename the fields.bytes, fields.unicode, and fields.pickle module names so
they do not conflict with builtins. You should be importing all your fields
like:

   from IndexedRedis.fields import IRPickleField

anyway, so you should not notice this change.

- More updates to the way Redis connection parameters work.
    
    Now -- instead of using the "fixed" defaults of host="127.0.0.1",
    port=6379, db=0 for connections, IndexedRedis will use what you set
    through setDefaultRedisConnectionParams. The old defaults remain the
    initial version of the defaultRedisConnectionParams, so existing code
    will not break.
    
    This means now that models that define their own REDIS_CONNECTION_PARAMS
    are filled in by the defaultRedisConnectionParams for any missing slots,
    rather than the aforementioned hard defaults.
    
    So for example, for the following:
    
        setDefaultRedisConnectionParams(
            { 'host' : '192.168.1.1', 'port' : 15000, 'db' : 0 } )
    
        class MySpecialModel(IndexedRedisModel):
            ...
    
            REDIS_CONNECTION_PARAMS = { 'db' : 1 }
    
    The default connection for everything will use the first dict
    (host='192.168.1.1', port=15000, db=0 ).
    
    MySpecialModel will override the db portion, but inherit host and port,
    so MySpecialModel will use (host='192.168.1.1', port=15000, db=1)
    
    This only affects connection pools that IndexedRedis manages. If you
    define your own (by setting the 'connection_pool' key on your connection
    params), as before you will not get any of the inheritance,
    connect/disconnect, size management, etc that IndexedRedis would
    otherwise provide.
    
    Some other changes because of this:
    
    - getDefaultRedisConnectionParams will return a COPY of the default
    params, to prevent updates (such that IndexedRedis can track and apply
    them properly). setDefaultRedisConnectionParams already made copies of
    the values and set them on the global instance.
    
    - Introduce as "clearRedisPools" method which will disconnect and clear
    the connection_pool ConnectionPools off of all non-managed connections.
    
    - Calling setDefaultRedisConnectionParams will now call clearRedisPools,
    disconnecting any active managed connections and clearing the cached
    connection_pool associated with each model (except managed pools). This
    allows inheritance to happen again with the new properties, for example
    on the above example code, if 'host' was changed in the global,
    MySpecialModel on next connection will inherit that value (and cache the
    inherited settings and connection pool).

4.1.4 - Sat Apr 15 2017

- Update READMEs a bit

4.1.3 - Wed Apr 12 2017

** IMPORTANT -- UPGRADE TO THIS, ADJUST YOUR CODE IN DEV TO PREPARE FOR 5.0.0
**

- Add IRBytesField to work the same as BINARY_FIELDS on both python2 and 3.
BINARY_FIELDS is going away in IndexedRedis 5.0.0

- Add the new Pickle field implementation from 5.0.0 (which supports pickling
ALL types, but is not compatible with the old IRPickleField data). This is
named IRNewPickleField and can be found in
    from IndexedRedis.fields.new_pickle import IRNewPickleField

You can change your models to use this field, then on each model call
    myModel.compat_convertPickleFields()

to convert the data from the old form to the new form.

Please see the 5.0.0 preview release also released today.

At 4.1.3 you can make all your code forwards-compatible (with the exception of
pickle fields, which need to be converted post-update).

Several things are changing in backwards-incompatible ways, but if you start
making changes now, you can run on 4.1.3 and then upgrade to 5.0.0 without
issue.


4.1.2 - Wed Apr 12 2017

- Fix some potential encoding issues on python2
- Fixup some potential issues with IRBase64Field on python2 ( keep in mind,
BASE64_FIELDS is going away in IndexedRedis 5.0.0! )
- Some minor performance updates in encoding/decoding paths


4.1.1 - Wed Apr 12 2017

- Ensure that the "origData" structure gets updated after a save which
performs an insert. It was previously only updated after an "update", but not
an insert. This prevents those fields as being seen as changed and updated
again if you save the inserted object.
- Update "runTests.py" to be version 1.2.3 from GoodTests.py



4.1.0 - Mon Jan 9 2017
- Fix issue where fields that implemented toStorage (complex IRFields) would
sometimes have problems doing updates
- Fix issue where field values that work by reference (like lists) could be
seen as not updated. We now make a copy in the "_origData" dict, if possible.
If the value doesn't support copy.copy, we fall back to value assignment.
- Add tests, specifically for IRPickleField with lists where these issues came up


4.0.2 - Fri Nov 18 2016
- Update link to pypi to be pythonhosted.org

4.0.1 - Fri Nov 18 2016
- Update pydoc

4.0.0 - Fri Nov 18 2016 INCOMPLETE CHANGELOG - TODO UPDATE

NOTE - This is not a complete changelog, but I haven't had time, and
IndexedRedis 4.0 has sat stable and idle for 8 months! It is completely
backwards compatible with IndexedRedis 3.0, and there are plenty of code
examples in the "tests" directory to get you started on indulging in all the
great new features. <3 <3

- Introduce "IRField" as a type, wherein you can define field properties
within the field itself. See examples.py track_number as example.
- Implement basic client-side type conversion after fetching. The IRField
has a valueType, which, when provided with a type, will convert the value
to that type. When "bool" is provided, it will accept "1", "True" (case
insensitive) as true, and "0", "False" (case insensitive) as false, and
raise an exception for other values.
- Add IRNullType which will be used for non-string null values (like if a
valueType=int field has never been set, it will be returned as irNull).
irNull only equals other IRNullTypes, so it can be filtered out and is
different from int(0) for int fields. irNull != '' or any other falsey
value, except another irNull.

	So like:
	from IndexedRedis import irNull
	...

	reallyZero = myObjs.filter(age__ne=irNull).filter(age__eq=0)

- Split up some functionality across a couple files
- Introduce an AdvancedFieldValueTypes module, which contains an example
advanced field, for a datetime implementation. Pass this as the
"valueType" on an IRField to have it automatically convert to/from a
datetime. This stores a string on the backend, and is slightly less
efficient than using a property and storing an integer, but more
convenient.
- Introduce an IRPickleField, use this in place of IRField. It only takes
one argument right now, "name". It will pickle data before sending to
Redis, and automatically unpickle it upon retrieval. I highly recommend
not using this unless you absolutely have to, you are better off designing
native models than trying to stuff in objects.
- Add compression via compressed fields (IRCompressedField in
fields.compressed), using zlib or bz2
- Change python2 to use unicode everywhere, to match python3 behaviour
- Don't try to decode every field with the default encoding. The default is
still to decode/encode using that encoding (for items not in
BINARY_FIELDS). Now you can use multiple encodings, or no encoding at all.
There is a new field type, fields.IRUnicodeField, which allows specifying a
non-default encoding.

E.g.:

FIELDS = [ IRUnicodeField('name_english', encoding='utf-8'),
           IRUnicodeField('name_jp', encoding='utf-16'),
           IRUnicodeField('uses_default_encoding') ]

You may also use fields.IRRawField which will do no encoding or
conversions to/from Redis.
- Field Types can implement "toBytes" to allow them to be base64
encoded/decoded, in the event that normal decoding wouldn't work (like the
IRUnicodeField type)
- Implement IRJsonValue which can be passed to valueType of IRField to
automatically convert to json before storing, and convert to a python dict
upon retrieval.
- Implement IRBase64Field as a field type which will base64-encode before
sending and base64-decode after retrieving. This adds a slight processing
overhead, but will save space and network traffic.
- Implement IRFieldChain, which allows chaining multiple field types
together. They will be applied left-to-right when going to redis, and
right-to-left when coming from. For example, to compress and base64 encode
a json dict, use:

FIELDS = [ ... ,
 IRFieldChain('myJsonData', [IRField(valueType=IRJsonValue),
 IRCompressedField(), IRBase64Field()]),
 ]

- Implement IRFixedPointField, for safely using floats in filtering and as
indexes. You define a fixed precision (decimalPlaces) and values will
always be represented/rounded to that many decimal places. This allows
values to be defined in a standard way independent of platform.
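
For example (field name hypothetical):

    FIELDS = [ IRFixedPointField('price', decimalPlaces=2) ]
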
- Ensure "_id" field is always an integer, not mix of either bytes or str
- Add destroyModel function to deleter, so you can do
MyModel.deleter.destroyModel() to destroy all keys related to the model.
Very similar to MyModel.reset([]) but more efficient and direct.
- Change MyModel.objects.delete() ( so a delete with no filters, i.e.
delete all objects) to call destroyModel. This ensures if you have an
invalid model or an incompatible change, you can delete the objects
without fetching anything.
- Use connection pooling to limit connections to each unique Redis server
   to 32 (default). I've found that in some cases of network disruption,
   python-Redis can leak connections FAST, and within seconds exhaust all
   private ports available on a system.
- Support hashed indexes, by passing hashIndex=True to an IRField.
  This will use an md5 hash in the key in lieu of the field value,
  which will save memory and network bandwidth, and increase speed where
  very large values are indexed.
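
  For example (field name hypothetical):

    FIELDS = [ IRField('longDescription', hashIndex=True) ]
    INDEXED_FIELDS = [ 'longDescription' ]
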
- Add "compat_convertHashedIndexes" on IndexedRedisSave, i.e.
"MyModel.objects.compat_convertHashedIndexes()" which should be used to
reindex a model when any field changes the value of the "hashIndex"
property.

- Probably a lot more stuff, but this has been sitting around for almost 8
months due to me not having enough time to write up complete documentation!!

3.1.1 - Apr 15 2016
- Fix some 4.0 docs that slipped into the 3.0 series

3.1.0 - Apr 15 2016
- Fix where on updating a model, some operations would occur outside of a
  transaction.
- Fix "reload" method to actually work. It now returns a dict of
  fields changed -> tuple(old value, new value)
- Allow model.saver.save to take a QueryableList (the return of all()) by
  changing isinstance to issubclass.


3.0.3 - Apr 11 2016
	- Fix issue with binary fields, they were always seen as "changed" even
	when no update was made, which could have caused unnecessary updates and
	saves
	- When deleting single objects, do entire operation in a transaction
	in case a part fails (like if Redis server is shut down)

3.0.0 - Jan 12 2016
    - Change the return type of functions that used to return a list of
    instantiated model objects to a QueryableList
    ( https://github.com/kata198/QueryableList ), which allows easily chaining
    complex client-side filtering (beyond just the equals and not-equals
    filtering available DB-side). This is not a backwards-incompatible change,
    but is forwards-incompatible (i.e. your code may require
    IndexedRedis>=3.0.0 if you choose to use the client-side filtering)
    - Restore the repr-view of bytes in __repr__ instead of the "_BINARY DATA
    OF LENGTH N_" format.

2.9.0 - Dec 28 2015
	- Add support for BINARY_FIELDS -- directly storing data as binary.
	This is a better option in most cases than BASE64_FIELDS, takes much
	less time, and is more space efficient
	- Implicitly convert the FIELDS defined on models to sets in __init__

2.8.1 - (Oct 06 2015)
 - Fix py3 install

2.8.0 - (Oct 06 2015)
	- Add "connect" method to IndexedRedisModel, which allows using models
	with an alternate redis instance than that on the parent model. It returns
	a "class" that inherits the IndexedRedisModel with the
	REDIS_CONNECTION_PARAMS updated to reflect those as passed in.

2.7.2 - (Sep 24 2015)
	- Remove accidental addition of not-finished code

2.7.1 - (Sep 23 2015)
	- Updates to documentation and examples

2.7.0 - (Sep 23 2015)
	- Change very basic existing model validation to a one-time call, and
	validate on more things.
	- *EXCITING* - Implement BASE64_FIELDS on IndexedRedisModel. Any
	FIELDS that also show up in BASE64_FIELDS will be base64-encoded
	before storage and decoded upon retrieval. This makes it possible and
	very simple to use "bytes" or other binary data. You may need this,
	for example, to store files or other blobs.
	
2.6.0 - (Sep 22 2015)
	- Add copyToExternal which allows copying a model to a redis instance
	other than the one specified for that model
	- Add copyAllToExternal which allows copying the results of a filterset to
	a redis instance other than that specified for the given model
	- Fix inconsistent returns, pk should always be a string
	- Rewrote internal save function to make more sense and allow forcing
	multiple IDs
	- Add reindex method which will rerun the indexes on a list of models. Use
	this as MyModel.objects.reindex() if you add an already-existing field to
	the INDEXED_FIELDS array, to add it to the indexes.

2.5.1 - (Sep 11 2015)
	- Add "exists" method for testing the existance of a primary key
	- Add MANIFEST.in file (missed in 2.5.0 -- this is only difference)
	- regen docs
	- Replace example.py with a copy of test.py... not sure why there were two

2.4.0 - (Jul 7 2015)
	- Add some more documentation
	- Cleanup connection handling
	- Remove undocumented _connection override on IndexedRedisModel
	

2.4.0 beta (unreleased to public) - (Jul 2 2015)
	- Add method, isIndexedRedisModel to see if a model extends
	IndexedRedisModel
	- Add method, hasUnsavedChanges which returns bool of if there are
	unsaved changes. Remember, you can see all (local) changes with
	getUpdatedFields
	- Implement __str__ and __repr__
	  Examples of __str__: 
	                         (Pdb) str(z)
	                         '<Song obj _id=24 at 0x7f3c6a3a4490>'
	                         (Pdb) z.artist = 'New Artist'
	                         (Pdb) str(z)
	                         '<Song obj _id=24 (Unsaved Changes) at 0x7f3c6a3a4490>'
	- Examples of __repr__ - This will build a constructor.
	eval(repr(obj)) should build the same object.
		(Pdb) print repr(song)
		Song(_id="6", album='Super Tracks', description='Super
		Nintendo', copyright='Copyright 2014 (c) Cheese Industries',
		title='Nintendo 2', artist='Mega Men', track_number='2',
		duration='1:55')
	- Implement a .copy function for copying models. Takes a single
	argument, copyPrimaryKey. If True, any saves on the copy will change
	the original model. Keep in mind this may cause sync issues. If  False
	(default) only the data is copied.
	- Implement __setstate__ and __getstate__ to make objects support
	being pickled and loaded from pickle strings efficiently.
	- Implement "reload" function on a model. This fetches any updates
	from Redis, if available, and returns a list of field names that were
	updated.



2.3.1 (Jun 27 2015)
	- Add some more docstring
	- make getNextID and peekNextID private methods. They should only be
	used internally.
	- Regenerate docs
	- Remove argument on getEncoding -- didn't do anything

2.3.0 (Jun 27 2015)
	- Change Model.objects.filter(...).delete to fetch minimal data (only
	indexed fields) instead of entire objects, so deleting is more
	efficient.
	- Add getOnlyFields, allOnlyFields, allOnlyIndexedFields, etc for
	getting partial objects
	- Increase efficiency of getMultiple function
	- Add more docstrings
	- Distribute pydoc as /IndexedHtml.html 
	- Allow the deleter (Model.deleter) to delete by primary key only

2.2.2 (Jun 25 2015)
	- Fix invalid variable

2.2.1 (Jun 25 2015)
	- Implement getPrimaryKeys (get primary keys at current filter level).
	Takes optional argument to sort by age
	- Implement first/last/random for getting oldest/newest/random record
	- Update documentation a bit

2.1.1 (Jun 21 2015):
	- Allow deleting directly from a filter object
	(SomeModel.objects.filter(...).delete)

2.1.0 (Jun 21 2015):
	- Much better handling of unicode in Python 2
	- allow changing encoding via a setEncoding function at the global
	IndexedRedis level

2.0.2 (May 5 2015):
	- fix typos
	- fix deleteMultiple

2.0.0 (May 1 2015):
	- Add support for __ne (not equals) filtering
	- Make filters default to be copies instead of operating on self, which
	allows them to be passed to functions but retain the original value. The
	old behaviour can be retained by using .filterInline
	- Enhance example with more features
	- Add some docstrings
	- Fix example where __init__ did not pass args and kwargs to parent and thus broke delete
	- add asDict method for representation as a dictionary
	- Change example to use asDict to not print original data
	- Move module to a standard package setup