Abstract data type (ADT) for truth table

Just as there is a nice, common/generic ADT for graphs, is there something similar for "truth tables"? I'm trying to find class definitions from projects that already implement a truth table.
If not, how would you go about designing a generic truth table ADT?
UPDATE: As suggested in comments, here is what I came up with:
Add (or delete) a row: TruthTable.add(input, output) (the first row added fixes the length in bits of inputs and outputs; all subsequent row additions are validated against it) and TruthTable.delete(input)
Get the output (image) for a given input: TruthTable.output(input) or TruthTable.image(input)
Get all the inputs: TruthTable.inputs (Odometer ordering.)
Get all outputs : TruthTable.outputs (Order according to ordering of inputs or odometer ordering?)
See if a table is completely specified, i.e., whether an output is specified for all 2^n possible inputs: TruthTable.completely_specified?
Other specialized operations can be:
Check if a truth table is invertible: TruthTable.invertible?
Check if two tables are equivalent: TT1 ==? TT2
(I sometimes wish programming languages allowed method names to end in a '?' for methods that return a Bool.)
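The operations listed above can be sketched as a small Python class. This is only a minimal sketch of one possible design; the class name, the bit-string representation, and all method names follow the question's wording rather than any existing library:

```python
class TruthTable:
    """Maps fixed-width bit-string inputs to fixed-width bit-string outputs."""

    def __init__(self):
        self._rows = {}       # input bit-string -> output bit-string
        self._in_len = None   # fixed by the first row added
        self._out_len = None

    def add(self, inp, out):
        # The first row fixes the input/output widths; later rows are validated.
        if self._in_len is None:
            self._in_len, self._out_len = len(inp), len(out)
        if len(inp) != self._in_len or len(out) != self._out_len:
            raise ValueError("row width does not match table")
        self._rows[inp] = out

    def delete(self, inp):
        del self._rows[inp]

    def output(self, inp):  # a.k.a. image(inp)
        return self._rows[inp]

    def inputs(self):
        # Odometer (lexicographic) ordering over the specified inputs.
        return sorted(self._rows)

    def outputs(self):
        # Ordered to match inputs().
        return [self._rows[i] for i in self.inputs()]

    def completely_specified(self):
        # All 2**n possible inputs must have an output.
        return self._in_len is not None and len(self._rows) == 2 ** self._in_len

    def invertible(self):
        # A bijection needs a fully specified table with all-distinct outputs.
        return self.completely_specified() and \
            len(set(self._rows.values())) == len(self._rows)

    def __eq__(self, other):
        return isinstance(other, TruthTable) and self._rows == other._rows
```

For example, adding the four rows of XOR gives a completely specified but non-invertible table (two inputs share the output "1").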


Sorting and splitting in a DFSORT together?

Input file Layout:
Position 01, length 10 - 10-digit account number
Position 53, length 1 - an indicator with values 'Y' or 'N'
Position 71, length 10 - timestamp
(Rest of the fields are insignificant for this sort)
Sorting the input file while splitting and eliminating duplicates in two different ways produces different results. I want to know why.
Case I: splitting and eliminating duplicates in the same step.
SORT FIELDS=(01,10,CH,A,53,01,CH,A)
Case II: splitting and eliminating duplicates in two different steps:
SORT FIELDS=(01,10,CH,A,53,01,CH,A)
These two approaches produce different output. Do you see any difference between the two cases? Please clarify.
You are asking to sort on an Account Number (10 characters ascending) then on an Indicator (1 character ascending).
These two fields alone determine the key of the record - Timestamp is not part of the sort key. Consequently if there
are two or more records with the same key they could be placed in any (random) order by the sort. No telling
what order the Timestamp values will appear.
Keeping the above in mind, consider what happens when you have two records with the same key but different
Timestamp values. One of these Timestamp values meets the given INCLUDE criteria and the other one doesn't.
The SUM FIELDS=NONE parameter is asking to remove duplicates based on the key. It does this by grouping
all of the records with the same key together and then selecting the last one in the group. Since the key
does not include the Timestamp, the chosen record is essentially random. Consequently it is unpredictable
whether you get the record that meets the subsequent INCLUDE condition.
There are a couple of ways to fix this:
Add the Timestamp to the sort key. This might not work because it may leave multiple records for the same Account Number / Indicator, that is, it may break your duplicate-removal requirement.
Request a stable sort.
A stable sort causes records having the same sort key to maintain their same relative positions after the sort.
This will preserve the original order of the Timestamp values in your file for records with the same key. When duplicates are removed, DFSORT will choose the last record from each set of duplicates. This should bring the predictability you are looking for to the duplicate-elimination process. Specify
a stable sort by adding an OPTION EQUALS control card before the SORT card.
EDIT Comment: ...picks the VERY FIRST record
The book I based my original answer on clearly stated that the last record in a group of records with the same
key would be selected when SUM FIELDS=NONE is specified. However, it is always
best to consult the vendor's own manuals. IBM's DFSORT Application Programming Guide states only
that one record with each key will be selected. However,
it also has the following note:
The FIRST operand of ICETOOL's SELECT operator can be used to perform the same
function as SUM FIELDS=NONE with OPTION EQUALS. Additionally, SELECT's FIRSTDUP,
ALLDUPS, NODUPS, HIGHER(x), LOWER(y), EQUAL(v), LASTDUP, and LAST operands can be
used to select records based on other criteria related to duplicate and non-duplicate
keys. SELECT's DISCARD(savedd) operand can be used to save the records discarded by
LAST. See SELECT Operator for complete details on the SELECT operator.
Based on this information I would suggest using ICETOOL's SELECT operator to select the correct record.
Sorry for the misinformation.
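To make the "which duplicate survives" question concrete, here is a small Python simulation (the sample records are invented for illustration). Python's sort is stable, like DFSORT with OPTION EQUALS: records with equal keys keep their original relative order, so keeping the first record per key group is deterministic, mirroring SELECT ... FIRST (which the note above equates with SUM FIELDS=NONE plus OPTION EQUALS):

```python
# Sample records: (account, indicator, timestamp) - invented data.
records = [
    ("0000000002", "N", "2009-03-02-10.00.00"),
    ("0000000001", "Y", "2009-03-01-10.00.00"),
    ("0000000001", "Y", "2009-03-02-10.00.00"),
]

# Stable sort on the key only (account, indicator); equal-keyed records
# keep their original input order, so timestamps stay in input order.
recs = sorted(records, key=lambda r: (r[0], r[1]))

# Keep the FIRST record of each key group, as ICETOOL's SELECT ... FIRST does.
seen = set()
survivors = []
for rec in recs:
    key = (rec[0], rec[1])
    if key not in seen:
        seen.add(key)
        survivors.append(rec)
```

Without the stable sort, the record surviving in each group would depend on how the sort happened to shuffle equal keys, which is exactly the unpredictability described above.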
The problem is as NealB identified.
The easiest thing to do is to "get rid of" the records you don't want by date before the SORT. The SORT will take less time. This assumes that SORTOUT is not required. If it is, you have to keep your INCLUDE= on the OUTFILs.
SELECT is a good option. SELECT uses OPTION EQUALS by default. The Control Cards below can be included in an xxxxCNTL dataset and actioned from the SELECT with USING(xxxx). SELECT gives you greater flexibility than SUM (you can get the last record, amongst other things).
The whole task sounds flawed. If there are records per account with different dates, I'd expect either the first date or the last date, or something else specific, to be required, not just whatever record happens to be hanging around at the end of the SUM.
SORT FIELDS=(01,10,CH,A,53,01,CH,A)
Or, if the Y/N cover all records:

ElasticSearch natural sort on a single complex field

This is for ElasticSearch 6.4.1.
The client is an archive and the records have a "RefNo" (reference number) field which is how they mostly identify the records. It's not a simple field, though, but a slash-delimited field that represents a hierarchy of records where each identifying section can be a mixture of numbers and letters, so that for instance "abc" represents one collection and "a142" another: "abc/foo", "abc/bar", "a142/1/letters", "a142/2/letters", "a142/10/letters" are all various items at different levels. They look pretty abstract to me but to the archivists they're actually meaningful.
I guess you can anticipate the problem. I want to be able to order on this field (actually a keyword version of it called RefNo.keyword) in a way which gives the obvious, natural order:
and so on. The trick is in getting the numerical sections to order in natural numerical order rather than alphabetically, whereas the rest of it is alphabetical.
In another context I have a list of the child records of a single record, and in that case the solution was to order first on the length of the field and then numerically:
But of course that only works if the values are all identical apart from the last section.
For the general case, I have a feeling there is something very simple that I'm missing. Is that just wishful thinking?
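For reference, the target ordering can be expressed as a sort key that splits the RefNo on slashes and compares digit runs numerically. This is only a plain-Python sketch of the desired "natural" order, not an Elasticsearch 6.4.1 solution (there, something equivalent would have to be precomputed into a sortable field or done in a script):

```python
import re

def natural_key(refno):
    # Split each slash-delimited section into digit and non-digit runs;
    # digit runs compare numerically, everything else alphabetically.
    # The (0, str) / (1, int) tags keep tuple comparison well-defined.
    parts = []
    for section in refno.split("/"):
        for run in re.findall(r"\d+|\D+", section):
            parts.append((1, int(run)) if run.isdigit() else (0, run))
    return parts
```

With this key, "a142/2/letters" sorts before "a142/10/letters", while purely alphabetic sections like "abc/bar" and "abc/foo" keep their lexical order.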

SAS insert column for dynamically determined levels

I am attempting to set up SAS to do something I am able to easily do in Excel, but am unable to find a way to do effectively. Given the first two tables shown here (dubbed TREE and LEVEL, respectively), I am trying to end up with the third table (FINAL_TREE).
Adding in the Level column to TREE, so that it becomes FINAL_TREE works as follows: any given tree must have a number Apple which is greater than or equal to Apple_Req for a given Level, as well as Orange greater than or equal to Orange_Req. So a Tree is given a Level to which it meets all given requirements.
So in the example tables, Tree3 is given Level1, despite the fact that it would easily be Level3 if not for its low Orange count.
In Excel, this can be done using INDEX and finding the MIN of two MATCH functions, but I don't think that can be directly translated into SAS. I imagine there is a way to set this up using explicitly defined nested IF statements, but I am hoping for a solution that can handle a LEVEL table with any number of levels (so long as the requirements are set up correctly).
In fact, this is quite a bit easier in SAS - in part because there are a lot of different ways to do this.
The most straightforward is probably using SQL, if you're familiar with it. The most similar to what you're doing in Excel, though, is Format, and perhaps the fastest as well.
proc format;
  value applef
    1 -< 4  = '1'
    5 -< 15 = '2'
  ;
  value orangef
    5 -< 15  = '1'
    16 -< 30 = '2'
    30 - high = '3'
  ;
run;
Now you can convert the values using put, turn them back into numbers, and take the min, just as you would in Excel. Basically this replaces your INDEX/MATCH.
data want;
  set have;
  /* put() returns character, so wrap in input() before taking min() */
  level = min(input(put(apple, applef.), 1.),
              input(put(orange, orangef.), 1.));
run;
You can also produce a format from a dataset directly - see this paper for example for using CNTLIN option on PROC FORMAT.
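The same min-of-two-lookups logic can be shown outside SAS. The thresholds below are invented for illustration, since the question's LEVEL table isn't reproduced here: for each fruit, find the highest level whose requirement the tree meets, then take the minimum across fruits:

```python
import bisect

# Hypothetical LEVEL table: requirements for Level1..Level3.
apple_req = [1, 5, 15]    # Apple_Req per level
orange_req = [5, 16, 30]  # Orange_Req per level

def level(apple, orange):
    # bisect_right counts how many requirements are <= the tree's count,
    # i.e. the highest level met for that fruit (0 if none are met).
    a = bisect.bisect_right(apple_req, apple)
    o = bisect.bisect_right(orange_req, orange)
    return min(a, o)
```

This mirrors the Tree3 example: a tree with a Level3 apple count but only a Level1 orange count ends up at Level1, and the requirement lists can have any number of levels.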

SAP ABAP Infoset Query - SELECT SUM and Duplicate lines

I am having trouble figuring out where to start/how to get the correct output.
I am very new to ABAP coding. I am trying to create an Infoset query and need to do a bit of coding in SQ02.
I have two tables joined - one being RBKP as the header for Invoice Receipts the other is RBDRSEG for the Invoice Document Items.
The query runs with some parameters/variants that aren't relevant here, but when it does, it needs to:
Look in RBDRSEG for all records with the same document number (RBKP-BELNR EQ RBDRSEG-RBLNR).
RBDRSEG may or may not have multiple line results for each Doc No.
I need to total the field RBDRSEG-DMBTR for each Doc No.
(If there are 5 lines for a Doc No., DMBTR will have a different value on each, and they need to be totaled.)
At this point I need the output to show only one line per Doc No. (along with other fields from RBKP), with the SUM of the DMBTR field.
I then need another field showing the difference between the field RBKP-RMWWR, which is the invoice total, and the DMBTR total calculated earlier for that Doc No.
If you could help, I would be incredibly grateful.
First you need to define a structure that will contain your selection data. An example structure for your requirement may look like this:
Don't forget to activate the structure and make sure it doesn't contain errors.
Now create the selection report. To use a report as the data-selection method, you need to add two comments, *<QUERY_HEAD> and *<QUERY_BODY>. *<QUERY_HEAD> has to be placed where your START-OF-SELECTION would usually go, and *<QUERY_BODY> inside a loop that puts the selected lines into an internal table with the same name as the structure you defined in SE11.
I made an example report to show how this would work:
tables: rbkp.

select-options:
  so_belnr for rbkp-belnr,
  so_gjahr for rbkp-gjahr.

data:
  itab type standard table of ZSTACK_RBKP_INFOSET_STR,
  wa_itab type ZSTACK_RBKP_INFOSET_STR,
  lv_diff type dmbtr.

*<QUERY_HEAD>
* here your selection starts.
select rbkp~belnr rbkp~gjahr rbkp~rmwwr rbkp~waers
       sum( rbdrseg~dmbtr ) as dmbtr
  from rbkp left outer join rbdrseg
    on rbkp~belnr = rbdrseg~rblnr and
       rbkp~gjahr = rbdrseg~rjahr
  into corresponding fields of table itab
  where rbkp~belnr in so_belnr and
        rbkp~gjahr in so_gjahr
  group by rbkp~belnr rbkp~gjahr rbkp~rmwwr rbkp~waers.

loop at itab into wa_itab.
  lv_diff = wa_itab-dmbtr - wa_itab-rmwwr.
  move lv_diff to wa_itab-diff.
  modify itab from wa_itab.
* this is the part that forwards your result set to the infoset
*<QUERY_BODY>
endloop.
The sample report first selects the RBKP lines along with a sum of RBDRSEG-DMBTR for each document in RBKP. After that, a loop updates the DIFF column with the difference between the selected columns RMWWR and DMBTR.
Unfortunately, in our SAP system the table RBDRSEG is empty, so I can't test that part of the report. But you can test it in your system by adding a breakpoint before the first loop and starting the report. You should then be able to look at the selected lines in internal table ITAB and see whether the selection works as expected.
Caveats in the example report: RBKP and RBDRSEG reference different currency fields, so the values in RMWWR and DMBTR may be in different currencies (RMWWR is in document currency; DMBTR appears to be in the default company currency). If that can be the case, you will have to convert them into the appropriate currency before calculating the difference. Also make sure to join RBKP and RBDRSEG using both the document number in BELNR/RBLNR and the year in GJAHR/RJAHR (the field in RBDRSEG is RJAHR, not GJAHR, although GJAHR also exists in RBDRSEG).
When your report works as expected, create the infoset based on it. You can then use the infoset like any other infoset.
I just realized that because you wrote about being new to ABAP, I immediately assumed you need to create a report for your infoset. Depending on your actual requirements, this may not be the case. You could create a simple infoset query over table RBKP and then use the infoset editor to add two more fields for the line total and the difference, plus some ABAP code that selects the sum of all corresponding lines in RBDRSEG and calculates the difference between RMWWR and that aggregated sum. This would probably be slower than a customized ABAP report, as the select would have to be repeated for each line in RBKP, so it really depends on the amount of data your users are going to query. A customized ABAP report is fine, flexible, and quick, but it may be overkill, and the number of people able to change a report is smaller than the number of people able to modify an infoset.
Additional info on the variant using the infoset designer:
First create a simple infoset reading only table RBKP (so no table join in the infoset definition). Then go to the application-specific enhancements:
In my example I already added 2 fields, LINETOTAL and DIFFERENCE. Both have the same properties as RBDRSEG-DMBTR. Make sure your field containing the sum of RBDRSEG-DMBTR has a lower sequence (here '1') than the field containing the difference. The sequence determines which fields will be calculated first.
Click on the coding button for the first field and add the coding to select the sum for a single RBKP entry:
Then do the same for the difference field:
Now that you have both fields available in your field list, you can add them to your field group on the right:
As mentioned before, the code you just entered will be processed for each line in RBKP. So this might have a huge impact on runtime performance, depending on the size of your initial result set.

Binding the Result of an Aggregate Function to a Projected Variable

I am trying to count the number of values for a given property and output each of the retrieved resources along with that number. I am trying to use BIND to store the result value of the COUNT function in a variable and project that variable to my results. However, that value seems to be empty and I do not understand why that is.
My query currently looks like this:
SELECT ?a ?c
WHERE {
  ?a <http://www.w3.org/2000/01/rdf-schema#label> ?b .
  BIND(COUNT(?b) AS ?c) .
}
I think I will have to group by ?a, though I am not sure yet how to proceed when I want to do that for several properties, but that is not the concern of this question: For now, I simply want to find out why ?c appears to be empty.
Shouldn't there - for now - be exactly one label per resulting row? If so, why isn't the literal 1^^xsd:integer bound to ?c - or at least some high number representing the total (ungrouped) number of labels (similarly to what happened here) - for example on the following endpoints:
Austrian Ski Team
I am aware the feature I am looking for may not be supported by some or all of these implementations - but if so, it seems unusual that the COUNT is simply "swallowed" without an error message (I did get an error message on some other endpoints for the syntax).
Thus, my question is: Why is the return value of COUNT empty?
Is the COUNT function in that position not recognized by the endpoints?
As it seems to be syntactically valid there, is that a shortcoming of current SPARQL engines, or is that by design?
Is the COUNT function evaluated only later (and if so, why doesn't it at least return something like 0)?
Your query isn't actually legal. There's a SPARQL query validator at sparql.org, and it reports a syntax error on count:
Syntax error:
Line 4, column 8: Aggregate expression not legal at this point
I can't speak to why some engines aren't choking on it. A number of built-in SPARQL functions can produce errors, and that typically ends up leaving variables unbound, which looks like empty results. Perhaps some SPARQL engine developers took this one step further and made illegal aggregate calls return an error (which shows up as an unbound variable). That's probably going to be a case-by-case investigation, and you'll need to contact the developers of those products separately.
At any rate, you probably want to end up writing a query more or less like the following. You can use (aggregate-function(args) as variable) in the projection portion of the query; that's how you bind the number of ?bs per ?a to ?c once you've grouped by ?a.
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>

select distinct ?a (count(?b) as ?c)
where { ?a rdfs:label ?b }
group by ?a
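What that GROUP BY query computes can be mirrored in plain Python: group the matching triples by subject and count the objects per group. The toy triples below are invented for illustration:

```python
from collections import Counter

# Toy triples: (subject, predicate, object).
triples = [
    ("ex:a1", "rdfs:label", "first"),
    ("ex:a1", "rdfs:label", "second"),
    ("ex:a2", "rdfs:label", "third"),
]

# Equivalent of: SELECT ?a (COUNT(?b) AS ?c)
#                WHERE { ?a rdfs:label ?b } GROUP BY ?a
counts = Counter(s for s, p, o in triples if p == "rdfs:label")
```

Each subject ends up paired with its label count, which is exactly the (?a, ?c) result rows the corrected query projects; without the GROUP BY, the aggregate has no groups to count over, which is why the original BIND form is rejected.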