-
Test name and serial id in JSON
Hi,
We request that Test Name and Test Serial ID fields be included in the JSON sent for Test Run events.
1 vote
Christine
responded
Hi,
Could you please clarify your request?
Thanks in advance.
Best regards,
Christine -
Request for JSON payload change.
Hi,
Requesting a change in the JSON file we receive from the webhook for the below events:
1. All issues changes
2. All tests changes
3. All requirements changes
For these events, the JSON doesn't indicate which field has changed, or the before and after values of that field. It would be good to have these details added to the JSON of the above events.
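To illustrate the request, a payload for such an event could carry an explicit list of changes. The field names below are a hypothetical sketch, not PractiTest's actual webhook schema:

```python
import json

# Hypothetical event payload: a "changes" array records, for each edited
# field, its name plus the before and after values (illustrative only).
event = {
    "event": "test_changed",
    "entity_id": 123,
    "changes": [
        {"field": "status", "before": "Draft", "after": "Ready"},
    ],
}

print(json.dumps(event, indent=2))
```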
1 vote -
expand the search feature when adding a test to a test run
When adding tests to a test run you can either select from the entire list of tests or run a pre-made filter. It would be nice to have a few more options, such as:
- Free form search
- Select Tests from an existing run
- Pick a Folder to add tests from
2 votes -
allow quick info to be shown in Test Sets and Runs like in the Test Library
In the Test Library screen, if you highlight the ID column you can click info to get a quick overview of the test. This would be good to have inside Test Sets and Runs too. Currently you can add the ID column there, but highlighting it doesn't show any info.
This would be useful because it would let whoever is running the test set get a brief view of the steps inside each test.
1 vote
Ido Tandy
responded
-
Add to the Report simple metrics of overall test progress
Display the percentage of the total number of tests that have been executed (tests marked Passed, Failed, or Not Complete).
i.e. 6 total tests: 2 marked Not Complete, 1 Passed and 3 No Run.
Percentage Tests Executed = 50%
Along with a metric capturing the test completed status (passed tests vs. the total number of tests) as a percentage.
i.e. 6 total tests: 2 marked Not Complete, 2 Passed and 2 No Run.
Percentage Tests Completed = 33%
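The two metrics in the examples above can be sketched as follows (status names are taken from the examples; the helper functions are hypothetical, not part of PractiTest):

```python
def pct_executed(statuses):
    # Executed = any test that is no longer "No Run"
    # (i.e. Passed, Failed or Not Complete).
    executed = sum(1 for s in statuses if s != "No Run")
    return round(100 * executed / len(statuses))

def pct_completed(statuses):
    # Completed = passed tests vs. the total number of tests.
    passed = sum(1 for s in statuses if s == "Passed")
    return round(100 * passed / len(statuses))

# First example: 2 Not Complete, 1 Passed, 3 No Run -> 50% executed.
run1 = ["Not Complete"] * 2 + ["Passed"] + ["No Run"] * 3
# Second example: 2 Not Complete, 2 Passed, 2 No Run -> 33% completed.
run2 = ["Not Complete"] * 2 + ["Passed"] * 2 + ["No Run"] * 2

print(pct_executed(run1))   # 50
print(pct_completed(run2))  # 33
```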
3 votes -
Implement ways to manage the automation job queue
Currently, there is no way to see which jobs (test runs) have been started or are queued to run, and no way to cancel a job once it has started. Also, only entire test sets can be sent to the job queue, though there are many times when it would be beneficial to submit only a subset of the tests in a set. For example, when you want to re-run a failed test, there is currently no way to do that without executing the entire test set again.
3 votes -
Better test step UI for running and editing test cases
Step result: rather than just changing the colour of the step name/heading, which gets lost in the links and other text above/below it, shade the whole step area/row.
If step N has a result provided (pass, fail, block) and all the steps between step 0 and step N are blank, set the blank steps 0 to N-1 to Pass. This will reduce the number of clicks a tester has to make in order to work through running a test case.
Allow a step to be flagged as "needing review" while the test is being run so it can be completed…
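The auto-pass suggestion above amounts to a small fill-forward rule. A minimal sketch, where the function name and step representation are assumptions rather than PractiTest code:

```python
def autofill_pass(results, n):
    # When step n receives a result, mark every earlier step that is
    # still blank (None) as "Pass", saving the tester per-step clicks.
    for i in range(n):
        if results[i] is None:
            results[i] = "Pass"
    return results

# Tester marks step 2 (0-based) as Fail; steps 0-1 were left blank.
steps = [None, None, "Fail", None]
autofill_pass(steps, 2)
print(steps)  # ['Pass', 'Pass', 'Fail', None]
```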
3 votes -
Field data set on 1 screen and showing back on other screens
For example:
I have one custom field, Browser (a list), with some values in it (Chrome, Firefox, Safari, etc.).
I set that custom field on these screens:
- Test
- Instance
- Issue
When I create a new Test and set Browser: Firefox, then once I'm done with the Test I create a Test Set & Instance. In the Instance screen I should see the custom field Browser already filled in with 'Firefox', the value I selected on the Test screen.
Then I start the instance and there is an issue: I create an Issue, open the Issue screen and…
3 votes -
Don't auto-enable the bookmark panel in PDF reports
Since there are no bookmarks created in the PDF versions of PractiTest reports anyway.
1 vote -
Add date-based and "show most recent only" filter options to "Instance" report types.
It would be very helpful to have a report that shows only the most recently run test instance and excludes the older test instances for the applicable filter.
3 votes -
Option to remove 'title' field for every test step
Although a small issue, this becomes irritating when we are creating or importing dozens of test cases.
The 'step title' field (it doesn't actually have a name) adds an extra step which seems unnecessary. In my experience it is rare to need to title each and every test step. Please could you have the more standard Description and Expected Result only format when creating test steps?
5 votes -
Test Set table: User should be able to edit fields directly from table
2 votes -
Set a default time zone for the account
When new users are added they are initially set up with the time zone set to UTC (GMT). For most users this is not appropriate, and they need to go into settings to set their own time zone. This even applies to users in the UK, who need to select London in order to get daylight saving time applied automatically in the summer. If an administrator could set the default time zone for the account, most new users would be set up with the right time.
4 votes -
clone or copy views in the tests library
For example, we have a view with a lot of child views, and now we would like to create the same view but with a different parent filter. We would like to clone the existing tree and modify it instead of creating it from scratch.
17 votes -
make dashboard update upon opening.
I corrected a spelling mistake in a filter name, then re-opened the Dashboard and found the spelling mistake still visible. It only displayed the corrected view after I manually refreshed the Dashboard. I think the assumption is that when you open the Dashboard it will be up to date with current information, not part way through a 15-minute update cycle.
1 vote -
Option to cascade view in filters
Add an option to cascade a view of a parent field to all of its children.
1 vote -
Improve FogBugz integration (push data to and from PractiTest)
(1) Link test cases to externally stored requirements (in FogBugz) and push the externally stored requirement data into PractiTest requirements for reporting/dashboards.
(2) Add a Push link next to the Defects field in the Test Run Result screen which pushes a bug report to the external system (FogBugz) without leaving PractiTest, and makes a copy in Issues at the same time for reporting/dashboards.
(3) Hyperlink to externally held data if only linking to issues/requirements in the external system and not pushing data to and fro (hovering the mouse cursor over a FogBugz case ID in either the Issues or Requirements screen…
3 votes