SoftTree Technologies
Technical Support Forums
FR: Scratchpad
gemisigo



Joined: 11 Mar 2010
Posts: 1833

Is it the contents of the Unified XML view tab that get saved in the SA_UNITTEST_LOG table? Is that what could be used for checks and test result validation?
Sun Apr 04, 2021 5:57 pm
SysOp
Site Admin


Joined: 26 Nov 2006
Posts: 7259

Yes, that's it.
Sun Apr 04, 2021 6:44 pm
gemisigo



Then I'm afraid it's a no.

There would be a small drawback in not being able to do the call and the check in the same test case, but that's only a matter of terminology. Test cases currently have an Initialize and an Execute tab. That would change into some cases being the Initialize part and others being the Check part of the test, which would require manually maintaining the link between the Init and Exec parts, something that would be quite easy to mismanage. Regardless, that's doable.

The problem is still the format. The actual and expected result sets have to be in the same format. The original result set is a table (or several tables), so it is easy to define the expected result sets as tables. Doing the same in XML, while still easy, is less so. Checks between the expected and the actual results (for example: does the actual result set contain any records where certain columns equal certain expected constant/variable values?) can be done both for tables in SQL and for XML paths. Doing the same while also checking that some values fall in a range or are defined by a function (written in SQL) is, as you said, still a piece of cake in SQL. I'm not sure it's even possible using XML. So the result sets would have to be converted from the XML back into tables. Some RDBMSs have native solutions for that: I'm not proficient with the method SQL Server uses, but I know it can select from XML, probably not only single values but complete tables as well. It's likely that Oracle and Postgres have something similar too. It's also likely that those solutions are syntactically incompatible with each other, so the tests are no longer (easily) portable, meaning that in some cases the same tests have to be designed once but implemented multiple times.
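For what it's worth, a range check of the kind mentioned above is straightforward once the XML is handled outside SQL. A minimal Python sketch, assuming a hypothetical results/table/row/cell layout (the actual format SA saves may well differ):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML shape for a saved result set; the real format written
# to SA_UNITTEST_LOG may differ.
results_xml = """
<results>
  <table id="1">
    <row id="1">
      <cell Name="ProductNumber">AR-5381</cell>
      <cell Name="Price">42.5</cell>
    </row>
  </table>
</results>
"""

def price_in_range(xml_text, lo, hi):
    """Check that every Price cell falls within [lo, hi]."""
    root = ET.fromstring(xml_text)
    prices = [float(c.text) for c in root.iter("cell")
              if c.get("Name") == "Price"]
    return bool(prices) and all(lo <= p <= hi for p in prices)

print(price_in_range(results_xml, 10, 100))  # True
```

The point being that this kind of logic is easy outside the database, but awkward to express in portable SQL over XML.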

MariaDB is even less capable in that department. It can only access XML through files, so there is no loading it from the SA_UNITTEST_LOG table. Saving it to a file instead does not work either, as (unless I'm mistaken) that file is saved on the local machine and not on the server where it could be loaded. And even if the file were there, MariaDB can use such files in only two ways. The first is creating tables with the CONNECT engine; that engine is not available to us, so it's out of the question. The other method is LOAD XML, which loads the data from the XML file.

And as I've seen, the structure in the XML you save is not the structure of the result set; it's a structure that holds the data describing the structure of the result set. Minus the data types, which are lost when saving to XML (without an XSD). So every time the result set is needed, it would require extracting the XML into a table, then extracting the data from that table to create the original result set table, and then populating it. That would be different for each result set, so it would either mean reinventing the wheel for every result set or designing a generic extractor. The latter could work but would still lack the data types. Creating the check part of the test would demand a formidable amount of resources, and so would maintaining/changing the test. The issue is that every time I'm asked for an estimate, I imagine that the estimate will most likely be way off, and even when it is accurate, it will be deemed not worth doing. And indeed, sometimes doing something is hard enough that it's not worth doing at all.
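To illustrate the "generic extractor" idea, here is a minimal Python sketch that rebuilds a table definition and INSERT statements from such structure-describing XML. The element names are assumptions, and, exactly as noted above, every column degrades to TEXT because the types are lost:

```python
import xml.etree.ElementTree as ET

# Sketch of a "generic extractor": rebuild a result-set table from
# structure-describing XML. Column types are unknown (lost in the XML),
# so everything defaults to TEXT -- the limitation described above.
sample = """
<results>
  <table id="1">
    <row id="1">
      <cell Name="id">7</cell><cell Name="name">foo</cell>
    </row>
    <row id="2">
      <cell Name="id">8</cell><cell Name="name">bar</cell>
    </row>
  </table>
</results>
"""

def xml_to_sql(xml_text, target="whatever_result_1"):
    root = ET.fromstring(xml_text)
    rows = [{c.get("Name"): c.text for c in row.iter("cell")}
            for row in root.iter("row")]
    cols = list(rows[0])
    ddl = "CREATE TABLE {} ({})".format(
        target, ", ".join("`%s` TEXT" % c for c in cols))
    dml = ["INSERT INTO {} VALUES ({})".format(
               target, ", ".join("'%s'" % r[c] for c in cols))
           for r in rows]
    return ddl, dml

ddl, dml = xml_to_sql(sample)
print(ddl)       # CREATE TABLE whatever_result_1 (`id` TEXT, `name` TEXT)
print(len(dml))  # 2
```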

Regarding SQLite, it has no native XML support whatsoever (that I know of), and I'm perfectly sure I'm not up to the task of writing an XML parser in pure SQL.

It's great that you managed to turn something impossible into something possible, hats off. But this whole business of doing the checks in XML is kind of like doing OOP in C. The fact that you could do it does not mean you should.


On the other hand, I've skimmed through the plugin development user guide. Am I correct in assuming that it is possible to replicate the operation of Unit Testing with a plugin?
Mon Apr 05, 2021 7:50 pm
SysOp
If the high-level requirements can be reduced to a set of simpler requirements, it would be easier to find proper solutions; kind of divide and conquer.

I thought from the previous conversations that the main challenge is dealing with multiple result sets returned by a single procedure. If so, the first step would be to address that: find a good solution for converting multiple result sets into something actionable. So now, if we want, we can have them converted to an XML string. It's true that this may require using database-specific code to query the XML, or maybe not; it depends on the specific requirements. For example, if we simply need to verify that our stored procedure returned two result sets for a given set of input parameters x and y, that both result sets contain one row, and that the second result set contains value z in the column "ProductNumber", that can be validated with a single SELECT...WHERE...LIKE... We know which substrings to look for and their sequence. Using my previous example, that could be as simple as

Code:
SELECT count(*) AS count_z_values
FROM sa_unittest_log
WHERE unit_test = 'Some unit test name here'
   AND test_case = 'Some test case name here'
   AND results LIKE '%name="ProductNumber">AR-5381<%'


For more advanced validation, one can use regular expressions: REGEXP_INSTR(results, ...) > 0


All databases support native functions for working with XML data using XPath. The syntax differs somewhat, and so do the capabilities. Oracle, SQL Server, and PostgreSQL support a full set of functions enabling querying XML as if the XML were a schema in the database with tables; relatively easy indeed. MariaDB is the weakest of them all: it literally provides just two functions, but they might be good enough for most simple tasks, like checking that a certain value is present at a certain location within the XML. For example, please see https://mariadb.com/kb/en/extractvalue/
That and similar methods can be used to obtain a value or values from the XML that we may need as parameters for the next step. Again using the previous example:

Code:
SELECT ExtractValue(results, 'results/table[@id=2]/row[@id=1]/cell[@Name="ProductNumber"]') AS z
FROM sa_unittest_log
WHERE unit_test = 'Some unit test name here'
   AND test_case = 'Some test case name here'


That solves the issue of chaining procedures and test cases, enabling us to get results from the previous test case and use them as inputs in the next one.

If that isn't enough and we need a totally platform-agnostic solution, then the result handling and/or XML parsing should be performed outside of the database, not in SQL code. Using a custom plugin is one of the options. Plugins cannot be hooked into the unit tests directly, but a plugin can read the unit test project file, which is, by the way, a regular XML file, read its test case definitions, and execute them one by one, handling results as needed. For example, special comments added at the beginning of test cases can be used to pass instructions to the plugin telling it what to look for in the test case results.
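The plugin-side loop described here can be sketched as follows. The element names in the project XML are made up for illustration; the real schema of the project file would have to be inspected first:

```python
import xml.etree.ElementTree as ET

# Sketch of a plugin-side loop over a unit test project file.
# The element/attribute names here are hypothetical placeholders.
project_xml = """
<project name="demo">
  <unittest name="ut1">
    <testcase name="case1">
      <execute>EXEC my_proc 1</execute>
      <check>OK</check>
    </testcase>
  </unittest>
</project>
"""

def load_cases(xml_text):
    root = ET.fromstring(xml_text)
    return [{"name": tc.get("name"),
             "execute": tc.findtext("execute"),
             "check": tc.findtext("check")}
            for tc in root.iter("testcase")]

for case in load_cases(project_xml):
    # a real plugin would run case["execute"] via the Connection object
    print(case["name"], "->", case["check"])  # case1 -> OK
```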
Mon Apr 05, 2021 11:26 pm
gemisigo



I've already brainstormed through the things you mentioned, and I knew that it became a possibility as soon as you started saving the result sets (in any form). The "divide et impera" part is exactly what I was referring to when I said that just because you could, it does not mean you should. You see, the division operator does not come free. It has a certain cost that increases rather steeply with the check condition's complexity, all the way to the level where being worthwhile becomes questionable. That cost is also re-applied for every non-trivial change made to the condition. Restoring the tables from XML would still be the more cost-efficient solution. Even with the data retrieved from XML needing plenty of transformation, and a rather nasty query to handle arbitrary result sets so that they can be restored to their original format, it would be a universally usable method, free from the encumbrance of the XML. It would be worth doing, as it would have to be done only once.

Except for that dimwit MariaDB, which is unable to do that, and that's the system where this stuff would be used the most (at least by us). By the way, SQLite is another fine specimen that shows no mercy when it comes to XML; there are exactly zero matches for that search. But that does not really matter, as it also lacks stored procedures as a feature. Therefore it does not fall victim to the limitation of not being able to store/check their result sets separately, so it can be unit tested completely with even the basic functionality of Unit Testing. A great (lack of) feature in SQLite, I must say :)

As for the plugin, I do not want to hook it to the unit tests. My intention is to replace the Unit Testing feature altogether. That's why I asked whether it is possible to copy the way Unit Testing works. It seems to me it is. No special comments needed, I guess.
Tue Apr 06, 2021 1:25 pm
SysOp
I think we agree that it's possible. We spoke about executing SQL code and looping through the results. The next issue/question is what to do with the results. Where would the conditional logic be stored and handled for each specific result? And how would the expectations be defined for each test case...
Tue Apr 06, 2021 2:05 pm
gemisigo



I have a somewhat vague concept of how Unit Testing works, so please correct me if I'm mistaken. I guess it will take quite some time to imitate everything it does, but I hope its basic functionality (including the improvements I need) can be copied in a week or two (or three).

Currently, each test case has several tabs, with Initialize, Execute, Cleanup, and Check being the ones whose operation I need to improve. When I first met Unit Testing, I thought it was way too weak to be useful. After all, deciding test success/failure based on a single value entered in the Check tab? Until I realized that arbitrarily complex conditions can be distilled down to a couple of conditional error messages and an 'OK' char(2) value that can be compared to that value. That's brilliantly, stupidly simple, and I don't think it could be enhanced. Initialize is rather self-explanatory; not much to be done there either.

My first problem with the rest of the tabs was how the Execute tab relates to the Check tab. The expected result entered in the Check tab is actually compared to the first column of the first row of the first result set returned when executing the contents of the Execute tab, and that seemed to be an issue.

In my earliest tries with stored procedures, this meant that I had to move every stored procedure call that returned any kind of result set to the Initialize tab to prevent it from hijacking the check value. This worked, but it sort of 'stained' the Initialize tab, and I still couldn't unit test such stored procedures (properly, or to be honest, at all). Then I tried unit testing them by copying their guts into Execute and adding checks there, but that was also a failure. It quickly turned out that having to update the test each time the stored procedure changed was tedious and error-prone. Also, MariaDB does not allow conditional execution (alongside a couple of other things) outside stored programs, as it calls procedures. So that was a total dead end, and it made me think perhaps I shouldn't fetch the final check value from the Execute tab.

My idea is to keep Initialize and Cleanup as they are, and have Execute loop through the results with the method you showed me in this post here. Then add another tab, the way you added the Results tab that can be seen in the screenshot you provided as a proof of concept. That new tab would be called Evaluate; any conditional logic would have to be implemented there, and it would all come down to the same method of producing a single simple value to be compared to the one in the Check tab. So the only changes I'd like to make are to move the fetching of that first-first-first column to the Evaluate tab and to iterate over the result sets in Execute, pushing them back into the DB, each into a separate table. Those tables could be named following some user-defined pattern, e.g. unit_test_schema.{case_name}_result_#, where the first result of the case whatever would be pushed into the freshly created table unit_test_schema.whatever_result_1, the second one into unit_test_schema.whatever_result_2, and so forth. It can be expected that the users are aware of the number of result sets and their structure, so putting together the evaluation logic using those tables in the Evaluate tab should be relatively easy and straightforward (at least compared to doing the same in XML).
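The proposed naming pattern is trivial to expand mechanically. A small Python sketch (the pattern syntax itself is, of course, only a proposal, not an existing SA feature):

```python
# Expand a user-defined table-naming pattern into one table name per
# result set. Both the pattern string and the schema name are
# hypothetical placeholders.
def result_table_names(pattern, case_name, n_results):
    base = pattern.replace("{case_name}", case_name)
    return [base.replace("#", str(i)) for i in range(1, n_results + 1)]

names = result_table_names("unit_test_schema.{case_name}_result_#",
                           "whatever", 3)
print(names)
# ['unit_test_schema.whatever_result_1', 'unit_test_schema.whatever_result_2',
#  'unit_test_schema.whatever_result_3']
```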

Basically, nothing really changes in the way the evaluation is done, except that a few checks that couldn't be done in the past become possible. It also allows doing the execution and the evaluation in the same test case instead of having to do the actual check in a subsequent test case.

And again, I admit this could be done with XML, which is much easier to store, but it would make the evaluation logic much harder. And I'd rather do something very hard once and have all the rest be (relatively) easy than always have to bring my A-game when trying not to screw up the most important part of the test.

As the last step, the Cleanup tab would do the user-defined cleanup in addition to automatically dropping the tables used for storing the results from the Execute tab. Or, optionally, it could leave them in place in case further examination of the result sets is desired. Sparing them in the first run could also help in developing the test logic by making code completion available in the Execute tab.

Showing the results while running the tests themselves is not that important, imho, as they could be made available after the test run simply by opting not to kill them in the Cleanup. Nevertheless, it would be a nice bonus. But I'm afraid that would add another week; I have some twenty years' worth of dust layers to wipe from my programming skills.
Wed Apr 07, 2021 4:55 pm
SysOp
Here is some food for thought. I believe our implementation is similar to how such things are done in desktop and web applications. A developer who develops some function F, in a broad meaning of the term "function", is also responsible for developing one or more test cases, which get deployed along with F to the Continuous Integration environment. Every time a new application code change is deployed, the application's build system runs the entire suite of available test cases to verify that the deployed changes don't break anything and don't cause any regressions. It alerts application owners by showing failed statuses on some dashboard screen and/or sending emails. A test case typically contains code calling F and verifying that it worked without triggering errors and returned something, a non-empty result. If an exception is raised within F, the test case fails. If no exception is raised by F but its result is empty, the code in the test case raises its own exception using "assert" or a similar method; the ultimate end result is the same. In more sophisticated test cases with dependencies, developers may need to call functions A, B, and C before they call F, all within the same test case.
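The application-side pattern described above (call F, fail on any exception, assert a non-empty result) looks roughly like this; a minimal Python sketch with a stand-in F:

```python
# Minimal sketch of the test-case pattern: call F, let exceptions fail
# the case, and assert on the result. F here is a trivial stand-in for
# the function under test.
def F(x):
    return [x * 2]

def test_F_returns_nonempty():
    result = F(21)            # an exception raised here fails the test case
    assert result, "F returned an empty result"
    assert result[0] == 42    # verify the expected value

test_F_returns_nonempty()
print("test passed")
```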

We are trying to support the same thing here on the database side. In the simplest case we can execute our stored procedure SP with some hard-coded parameters; if an error is raised, our test case fails. We can also optionally check the results. Checking the returned value is the simplest possible check: it can be done with a simple statement like SELECT count(1), or similarly we can check the stored procedure return code using SET @ret = EXEC SP...; SELECT @ret.
In more complex cases, developers who write test cases may need to add additional code before and after the SP call to prepare its execution state, call SP, and then verify the actual results in the database by evaluating the data changes. If we don't like the result, we raise an exception and that makes the test case fail. The Initialize and Cleanup sections make it a bit more structured and self-documenting; you already described their use. But the original concept is still like

Code:
-- execute something (this could be one procedure or a batch with many statements)
EXEC executable procedure or code goes here

-- verify results (could be as complex as needed)
IF NOT EXISTS (SELECT ...)
    RAISERROR ...

Strictly speaking, there is no need to return a value and check it; just raise an error if the result isn't satisfactory. Of course, MariaDB is not like SQL Server: it cannot execute an anonymous SQL batch containing RAISERROR. But we can still use simple tricks like the one below to fail test cases when we want them to fail

Code:
IF NOT EXISTS (SELECT ...)
    SELECT 1/0 AS `this_will_trigger_error`;


Last edited by SysOp on Mon Apr 12, 2021 12:28 am; edited 1 time in total
Wed Apr 07, 2021 6:19 pm
SysOp
I forgot to mention that there can be different objectives for unit testing, which may require different solutions. In the Unit Testing Framework we support most of them, arguably to different degrees.

1. Functional testing - typically this kind of testing uses assertions to check that each feature is implemented correctly by comparing the results for a given input against the specification. In our framework, instead of assertions, we rely on exceptions raised by test cases, coupled with SQL-based conditional checks for validating execution results. I personally feel that this is not very obvious. Perhaps Execute could be split into Execute and Validate tabs, with the SQL code from Validate executed in tandem with Execute within the same SQL batch.

2. Regression testing - tests across components to ensure that all features continue working without errors. This is less strict than functional testing, as there is no need to verify the results for their quality, only their availability. Functional testing can be used for this too if there is sufficient coverage. What we don't support now, and what I personally think would be a very good enhancement, is establishing a baseline: basically piggybacking on the results-saving feature we discussed before and then, in consecutive runs, comparing new results against the baseline. If they don't match, treat that condition as a regression.

3. Performance testing - captures the performance of individual features as well as the application as a whole and compares them against a previously established baseline and requirements. It's quite common to see applications slow down as their development continues and more code and data get added to the business logic and the database, and it's rather important to check that the required performance levels don't get breached. We already support execution time checks as well as stress testing.
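The baseline idea in point 2 can be sketched as a plain comparison of saved results against current ones. Here results are represented as opaque strings keyed by test case name, which is an assumption for illustration only:

```python
# Sketch of the baseline idea: store the serialized results of a
# known-good run, then flag any later run whose results differ as a
# regression. The dict-of-strings representation is hypothetical.
def check_against_baseline(baseline, current):
    """Return the test cases whose results drifted from the baseline."""
    return [name for name, result in current.items()
            if baseline.get(name) != result]

baseline = {"case1": "<results>...</results>", "case2": "<results>ok</results>"}
current  = {"case1": "<results>...</results>", "case2": "<results>changed</results>"}
print(check_against_baseline(baseline, current))  # ['case2']
```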
Thu Apr 08, 2021 12:50 am
gemisigo



Thank you very much for the additional info. Most of the changes I want to make are internal only; I'll try not to modify its appearance or functionality too much compared to Unit Testing, in case anyone else is interested in using the plugin. Some of those changes would require a different approach for different RDBMSs, so I might not be able to make it work for each of them, but I guess the missing pieces can be created by anyone who needs them.

Raising/throwing exceptions only to fail a test also requires a different approach for different RDBMSs; I think using CASE to evaluate results offers a much more uniform (and readable) way to make tests succeed or fail. For example, as you already stated, MariaDB lacks the ability to execute anonymous batches. Besides that, earlier versions (including the one we're still using in a very large number of instances) haven't got any kind of proper error handling. They lack things like RAISERROR, and you might also be surprised that (I guess depending on some server settings) not only will SELECT 1/0 not throw any error, it won't even trigger a warning. It simply dies silently.

To make it worse, we've recently run into another remarkable "feature". It affects MariaDB 5.5.57 and who knows which other versions. When calling a stored procedure that returns multiple result sets, as soon as the first of them is created, any issue with any of the subsequent ones will make the stored procedure call "fail successfully". We've seen it both in applications we developed and in the SQL Editor; in both cases the connection says the stored procedure call was successful. The only simple indicator that there was a problem is SHOW WARNINGS/ERRORS, but that cannot be checked using conventional methods in the DB (and therefore not in Unit Testing either). This also means that since this currently cannot be detected at all, any kind of test, be it functional, regression, or performance, will not be reliable. The only way to detect these stealthy stalkers is to check the result sets (or SHOW WARNINGS) using the new method. For example, if the procedure is expected to return three result sets and it returns only one or two, there's obviously something wrong. This is another thing that could only be done using the saved result sets (either in tables or in XML).
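The "count the result sets" check can also be done outside the database. A sketch following the DB-API nextset() convention (as in mysql-connector-style drivers), with a stub standing in for a real MariaDB connection:

```python
# Sketch of detecting the "fail successfully" case: if a procedure is
# expected to return three result sets and fewer arrive, something went
# wrong even though the call itself reported success. FakeCursor is a
# stub; a real driver cursor would expose the same nextset() behavior.
class FakeCursor:
    def __init__(self, resultsets):
        self._sets = list(resultsets)
    def nextset(self):
        self._sets.pop(0)
        return True if self._sets else None

def count_resultsets(cursor):
    count = 1                      # the first result set is already current
    while cursor.nextset():
        count += 1
    return count

cur = FakeCursor([["rs1"], ["rs2"]])   # procedure returned only 2 of 3
got = count_resultsets(cur)
print(got, "->", "FAIL" if got < 3 else "OK")  # 2 -> FAIL
```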

I still have some questions about components I don't understand and couldn't find information about in the Dev's Guide:
- there's a component called TsaSyntaxMemo which should provide a rich edit control and syntax highlighting. I noticed that it offers the same assistance as the one I get in SE, but I could not enable syntax highlighting. How do I do that?

There are some things I'll probably not be able to replicate, such as the scheduled execution of unit tests or sending the results by email, as I've got no idea how to do those and currently feel no inclination to spend time learning.
Sun Apr 11, 2021 3:44 pm
SysOp
The DataTables property of the Connection object can be used to check how many result sets were returned by the executed SQL code

Code:
with Connection do
begin
   if not Execute( 'EXEC my_proc ...' ) then 
       // execution failed, handle this error...
       // specific error message can be obtained using LastError property of the Connection object,
       // for example RaiseException( LastError );   // this assumes that the exception raised here is going to be handled by the caller
       ...

   if DataTables.Count < 3 then
      // we expected 3 results but got less than that, fail the case ...  something like RaiseException( 'Did not get all expected results from ...' );
     ....
end




You're correct, TsaSyntaxMemo is an embeddable editor. It can be used with different programming languages, it supports a flexible programmable model, and it's preconfigured for SQL IntelliSense. That's the good news. The bad news is that the syntax highlighting is handled by a different component, TsaSyntaxAnalyzer. That component is also available in the palette and can be dropped onto the same form or pane and then selected in the SyntaxAnalyzer property of TsaSyntaxMemo. However, as of version 11.5 it doesn't provide predefined code lexers. It gets hairy there: the active lexer needs to be database-type aware, using different syntax highlighting rules for different database types. That's a to-do item left for future versions. For now, in plugins, the text in TsaSyntaxMemo is always black.
Mon Apr 12, 2021 2:13 am
gemisigo



I still lack a couple of things to make this work properly.

To reroute the results into the DB the easiest way, I wanted to parse their structure, create the target tables one by one, query the results one by one, and then use the AddRow and Save functions to stash the data. It seems that will not be possible after all. To do that, I'd have to clone Connection.DataTables to retain the results while using Connection.Execute to create their DB destinations, but I have found no way to clone anything. If that's impossible, I can still parse them to concatenate a single monolithic SQL query that does it all in a single run, but that will be ugly if I want to be able to handle the different syntaxes of the different RDBMSs.
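The "single monolithic SQL query" fallback amounts to serializing the in-memory rows into INSERT statements. A minimal sketch that glosses over dialect-specific quoting (only single quotes are escaped here; identifiers and types are left as plain text):

```python
# Serialize in-memory result-set rows into one INSERT statement.
# Dialect details (identifier quoting, typed literals) are deliberately
# ignored in this sketch.
def rows_to_insert(table, columns, rows):
    values = ", ".join(
        "(" + ", ".join("'%s'" % str(v).replace("'", "''") for v in row) + ")"
        for row in rows)
    return "INSERT INTO {} ({}) VALUES {};".format(
        table, ", ".join(columns), values)

sql = rows_to_insert("unit_test_schema.whatever_result_1",
                     ["id", "name"], [(1, "foo"), (2, "b'ar")])
print(sql)
# INSERT INTO unit_test_schema.whatever_result_1 (id, name)
#   VALUES ('1', 'foo'), ('2', 'b''ar');
```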

I'm also facing other difficulties. I'd need to create a few classes: one for the unit testing project that could store the properties of the project, which would include a vector of unit tests (another class), which in turn would include another vector, the one with the test cases (containing the test case properties like the queries for Initialize, Execute, Evaluate, etc.). But I couldn't figure out how to create, let alone instantiate, a class. I've seen some mentions of the Class keyword in the .cs source files in the plugins folder, but all of those seem to end up in .dlls instead of being used directly.

The FastScript_syntax.pdf mentions classes among the features but also says "no type declarations (records, classes) in the script; no records, no pointers" etc. (page 5). What exactly does that mean? Is it really not possible to create classes? I doubt that; however, if that's the case, this can be done as a collection of arrays of different types, but that would be all nine hells with regard to code maintenance and refactoring. I haven't intentionally done anything like that since I was 12.
Tue Apr 13, 2021 12:24 pm
SysOp
Let me try answering your questions as well as I can.

Two kinds of plugins are supported:

1. Plugins utilizing the FastScript engine, supporting 4 different programming languages. You can develop them using the provided Plugin Development IDE. The IDE supports developing graphical forms and non-graphical code units. Both forms and units are classes in a broad sense of the term. The file extensions depend on the programming language you select. You can use multiple files/classes in your plugin. How you reference them from other files depends on the selected programming language: for Pascal that would be the "uses" keyword, for C++ the "include" directive, and so on. All inclusion statements appear at the beginning of a file.

2. Plugins utilizing the .NET framework, which you typically code in Visual Studio or another .NET development environment. By default their files will have .cs extensions, for C#. Here you can use multiple files too; you reference them the same way you reference system .NET classes, using the "using" keyword.

You can find source code for example plugins in C:\Program Files (x86)\SQL Assistant 11\plugins. There are examples for both types of plugins.


I'm afraid that with the first kind of plugin you wouldn't be able to do the "AddRow and Save" things; you would need to write it in SQL, using INSERT queries. With .NET plugins you can use everything that is available in .NET. For example, you can use its Recordset object, which supports AddRow and similar high-level methods and can provide the wanted code generalization and database abstraction. But the price for that is that you would need to open your own ADOConnection and not use the connection provided by SQL Assistant via its automation interface. More details can be found here
https://docs.microsoft.com/en-us/sql/ado/reference/ado-api/recordset-object-ado?view=sql-server-ver15
Of course, when it comes to ADOConnection, you need to deal with ADO database drivers and their specifics, while with FastScript-based plugins all of that is kind of "encapsulated" for you: the Connection object provides a generic interface to all supported databases.


Back to file inclusion. Here is an example for a Pascal-based plugin.

Code:
uses
  'Utilities.pas',
  'Messages.pas';

...
Utilities.SomeProcedure( 1, 'abc' );
ret := Utilities.SomeFunction( 1, 'abc' );
...
Messages.HelloWorld( );
...
Wed Apr 14, 2021 2:01 am
gemisigo



Not being able to use AddRow and Save does not really matter; it's just a minor inconvenience of building an SQL query. But the other things you said do matter. Let me summarize, to see if I understand correctly.

Type 2 plugins are not written in the Plugin IDE that comes with SA; they are written in VS or some other IDE. They behave more or less like separate applications and are plugins only in the sense that they can be called from SA and integrated into SA to some extent. They offer the greatest flexibility regarding what can be achieved. However, if I want to write them to be used with all kinds of RDBMSs, I have to deal with every little sh... er... every minute detail and nuance that is otherwise already hidden by SA. This indicates a great deal of work to be done and might as well become a completely independent application. Hey, this could even sell. Conclusion: it would take months.

Type 1 plugins are written in the Plugin IDE using FastScript, and much of the work is either already done by SA (e.g. connection handling, by providing the Connection object) or can be delegated to SA (e.g. using Connection.Execute, Connection.DataTables, etc.). In return, I have no access to anything (e.g. classes, vectors, methods, properties, etc.) that I'd need to (easily) create a complex (data) structure such as a unit testing project with its dynamically changing number of unit tests (and their dynamically changing number of actual test cases). Still, it can be done (assembly does not have those either); it just implies a great deal of work. Conclusion: this could also take months.

Are the following assumptions correct?

For type 1 plugins, the unit files you mention are not exactly classes, but rather instances (some languages call those base objects). They also cannot be instantiated any further. That is, when I create and include the unit "Messages", it behaves as an object created from a "Messages" class: the declared variables act as the object's variables, the functions are the methods of the "Messages" object, but I cannot have another object of the class "Messages", because there's no way to create a new one. Thus, I cannot create a UnitTestProject class that has a map/vector of UnitTest objects, each of those having a map/vector of TestCase objects, with both of those classes having their respective Execute, Evaluate, etc. methods. If I want something like that, I have to use fixed-size arrays and pray that the size I chose is sufficient.
Wed Apr 14, 2021 10:32 am
gemisigo



Most of the business logic related to the improvements I want to make is designed and/or implemented. Now I only have to copy/imitate the rest of Unit Testing's features, which is about 90% of the work. Some of them might not be possible, like the scheduled execution of the unit tests and the color-enabled log window.
Thu Apr 15, 2021 2:53 pm
Powered by phpBB © 2001, 2005 phpBB Group