Use the crlf data scripts to produce a corpus of known-good data in
"git" format (aka ODB format) from a variety of files with different
line endings. These files were created by running `git add` to stage
the contents and then extracting the resulting data from the
repository.
We'll use these to ensure that we create identical contents when we add
files into the index.
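
At a high level, the generation step for each input file looks roughly
like the following (a minimal sketch; the repository, file, and output
names are illustrative, not the ones used by the actual script):

    git init --quiet scratch && cd scratch
    cp ../crlf_input.txt .                 # one of the input files
    git add crlf_input.txt                 # git applies its CRLF filtering here
    oid=$(git rev-parse :crlf_input.txt)   # blob OID of the staged contents
    git cat-file blob "$oid" > ../expected/crlf_input.txt   # raw ODB contents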
Re-use the existing crlf data generation script for creating the to-odb
dataset. Also, store the actual file contents instead of the object ID
so that we can identify what differs rather than merely detecting that
a difference exists.
Include a shell script that generates the expected data (OIDs and
failure messages) by calling git.git and capturing its output as a test
resource.
Right now, there is no need to differentiate between systems since, as
far as we recall, git behaves the same on all of them.
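
A rough sketch of what that generator does (the loop structure, file
names, and output layout are assumptions for illustration, not the
actual script):

    # for each input file, record either the staged blob OID or git's
    # failure message (for instance when core.safecrlf rejects a conversion)
    for f in ../input/*; do
        name=$(basename "$f")
        cp "$f" .
        if git add "$name" 2> "failure_$name"; then
            git rev-parse ":$name" > "oid_$name"
        fi
        git rm -q --cached --ignore-unmatch "$name"
    done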
Signed-off-by: Sven Strickroth <email@cs-ware.de>
Move the crlf_data folders responsible for holding the state of the
filters going into the working directory to "to_workdir" variations of
the folder name to accommodate future growth into the "to odb" filter
variation. Update the script to create these new folders as appropriate.
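
The move itself is essentially a set of renames along these lines (the
folder names are illustrative of the pattern rather than an exact
listing):

    git mv tests/resources/crlf_data/posix   tests/resources/crlf_data/posix_to_workdir
    git mv tests/resources/crlf_data/windows tests/resources/crlf_data/windows_to_workdir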