-
Test Sets and Runs --> Test Instances: widen the column drastically
The column is far too narrow to tell which test is which. It can be widened manually, but it snaps back to its minimum width whenever you do anything else. An easy fix with great utility.
1 vote -
When typing requirement numbers into a test, bind the "Add" button to the Enter key
I keep forgetting that it doesn't save the requirements I just typed when I hit Enter; it takes me back to the test's main page, so I have to repeat the work. Old habits die hard...
1 vote -
Improve functionality of editing and updating tests from runs
I find the current workflow for updating a test while executing it a bit clunky. I understand it is meant for small changes, and that big test overhauls should only be done in the test's Steps tab, but I still think there are improvements to be made.
1) If a step has been edited during a run, PractiTest should ask the user whether they wish to 'Update Original Test' before navigating away from the page. I often update a step but forget to click the button to update the test at the end.
2)…
4 votes -
The Test Sets & Runs views should have a column that displays a link/icon when a bug ticket has been opened for a test within the set.
The list of test sets should include a column that displays either an icon or a ticket link (to Jira or whatever) for sets that have a bug/issue opened, so that at a glance you can see where bug tickets have been created for failed test sets. Within a set itself there should also be a column that displays the bug tickets. Most competitor tools already have this feature.
1 vote -
API access to the DB, so we can run queries directly against the DB
It could be used to run reports requiring multiple DB operations that are not currently supported.
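The kind of cross-entity report the requester has in mind can be sketched with plain SQL. The schema below (the `tests` and `runs` tables and their columns) is a made-up stand-in, not PractiTest's actual database layout; SQLite is used only to make the sketch runnable:

```python
import sqlite3

# Build a tiny in-memory stand-in for a (hypothetical) test-management schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tests (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE runs  (id INTEGER PRIMARY KEY, test_id INTEGER, status TEXT);
INSERT INTO tests VALUES (1, 'Login'), (2, 'Checkout');
INSERT INTO runs  VALUES (1, 1, 'PASSED'), (2, 1, 'FAILED'), (3, 2, 'PASSED');
""")

# A report that joins tests to their runs -- the kind of ad-hoc aggregation a
# direct DB API would make possible without built-in report support.
rows = conn.execute("""
    SELECT t.name, COUNT(*) AS total,
           SUM(CASE WHEN r.status = 'FAILED' THEN 1 ELSE 0 END) AS failed
    FROM tests t JOIN runs r ON r.test_id = t.id
    GROUP BY t.name ORDER BY t.name
""").fetchall()

for name, total, failed in rows:
    print(f"{name}: {total} runs, {failed} failed")
```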
6 votes -
Identify updates to requirements
I would like PractiTest to recognize updates when I re-upload requirements. For instance, if I have uploaded 10 requirements from version 1.0 of an Excel file, then update the document and re-upload version 1.1, I want PractiTest to recognize the existing requirements and identify which ones are changed and which are new. This would let me see which test cases are affected (and should be modified or deleted) and which requirements need new test cases.
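The recognition step being requested amounts to a diff keyed on a stable requirement ID. A minimal sketch, assuming requirements arrive as dictionaries keyed by ID (the IDs and field names are illustrative assumptions):

```python
# Classify re-uploaded requirements as new, changed, or unchanged by keying
# on a stable requirement ID.
def diff_requirements(existing, uploaded):
    new, changed, unchanged = [], [], []
    for req_id, fields in uploaded.items():
        if req_id not in existing:
            new.append(req_id)
        elif existing[req_id] != fields:
            changed.append(req_id)       # linked test cases may need review
        else:
            unchanged.append(req_id)
    return new, changed, unchanged

v1_0 = {"REQ-1": {"title": "Login"}, "REQ-2": {"title": "Search"}}
v1_1 = {"REQ-1": {"title": "Login via SSO"},   # edited since v1.0
        "REQ-2": {"title": "Search"},          # untouched
        "REQ-3": {"title": "Export"}}          # brand new in v1.1
print(diff_requirements(v1_0, v1_1))   # (['REQ-3'], ['REQ-1'], ['REQ-2'])
```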
5 votes -
Test Case Recorder
Other test case management tools have a browser plugin that can be used to record web sessions. The sessions can then be edited and used to either create a defect and/or create a test case. This is incredibly useful and facilitates the creation of test cases and bugs. The test cases created this way can include annotated screenshots for expected results as well as videos.
Here is a link to one of your competitor's implementations of this feature: https://chrome.google.com/webstore/detail/qtest-web-explorer/ihoajedonhepnplmfgdkdjohbmadckdk
1 vote -
Anyone assigned to an issue should be automatically added to the notification list
It would be nice if anyone assigned to an issue were automatically added to its notification list, or if this were at least configurable in the settings. As it stands, the person has to manually add themselves as a watcher to start getting notifications, or configure notifications for every issue, which is a bit overkill.
1 vote -
Create an option in the Report Center for generating reports with our company's logo shown by default in the top-right corner
When I go to the Report Center, it should be possible to upload our logo into a report.
6 votes
Ido Tandy
responded
-
Ability to create a single dashboard across multiple projects
I would like a single dashboard that covers multiple projects. As a manager of projects, this would be highly beneficial.
17 votes -
Allow the 'Actual result' field inside steps to be made mandatory
Some step results are so important that you need to be sure the tester has described the result. Making this field mandatory would let you manage this situation.
3 votes -
Test name and serial id in JSON
Hi,
We request Test Name and Test Serial ID fields in the JSON sent for Test Run events.
1 vote
Christine
responded
Hi,
Could you please clarify your request?
Thanks in advance.
Best regards,
Christine -
Request for JSON payload change.
Hi,
Requesting a change in the JSON file we receive from the webhook for the below events:
1. All issues changes
2. All tests changes
3. All requirements changes
For these events, the JSON doesn't indicate which field changed, nor the before and after values of that field. It would be good to have these fields added to the JSON for the above events.
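What that addition could look like can be sketched as a server-side diff between the entity's state before and after the edit. The payload shape below is an assumption for illustration, not PractiTest's actual webhook format:

```python
# Given the entity state before and after an edit, emit a "changes" list with
# the field name plus its old and new values (hypothetical payload shape).
def build_changes(before, after):
    return [
        {"field": f, "before": before.get(f), "after": after.get(f)}
        for f in sorted(set(before) | set(after))
        if before.get(f) != after.get(f)
    ]

before = {"status": "Open", "priority": "Low", "assignee": "dana"}
after  = {"status": "Fixed", "priority": "Low", "assignee": "dana"}
print(build_changes(before, after))
# [{'field': 'status', 'before': 'Open', 'after': 'Fixed'}]
```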
1 vote -
Expand the search feature when adding tests to a test run
When adding tests to a test run, you can either select from the entire list of tests or run a pre-made filter. It would be nice to have a few more options, such as:
- Free form search
- Select Tests from an existing run
- Pick a Folder to add tests from
2 votes -
Allow quick info to be shown in Test Sets & Runs, like in the Test Library
In the Test Library screen, if you highlight the ID column you can click Info to get a quick overview of the test. This would be good to have inside Test Sets & Runs too. Currently you can add the ID column, but highlighting it doesn't show any info.
This would be useful because it would let whoever is running the test set get a brief view of the steps inside each test.
1 vote
Ido Tandy
responded
-
Add simple metrics of overall test progress to reports
Display the percentage of the total number of tests that have been executed (marked as Passed, Failed, or Not Complete).
i.e. 6 total tests, 2 marked as Not Complete, 1 Passed and 3 No Run.
Percentage Test Executed = 50%
Along with a metric capturing test completion status (passed tests vs. total number of tests) as a percentage.
i.e. 6 total tests, 2 marked as Not Complete, 2 Passed and 2 No Run.
Percentage Test Completed = 33%
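The two percentages described above can be computed in a few lines. A minimal sketch, assuming status counts are available as a mapping (the status names follow the examples in the request):

```python
# Compute the two progress percentages described in the request.
def progress_metrics(counts):
    total = sum(counts.values())
    executed = total - counts.get("No Run", 0)   # everything except No Run
    completed = counts.get("Passed", 0)          # completed = Passed only
    return round(100 * executed / total), round(100 * completed / total)

# 6 tests: 2 Not Complete, 1 Passed, 3 No Run -> 50% executed, 17% completed
print(progress_metrics({"Not Complete": 2, "Passed": 1, "No Run": 3}))
# 6 tests: 2 Not Complete, 2 Passed, 2 No Run -> 67% executed, 33% completed
print(progress_metrics({"Not Complete": 2, "Passed": 2, "No Run": 2}))
```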
3 votes -
Implement ways to manage the automation job queue
Currently, there is no way to see which jobs (test runs) have been started or are waiting in the queue. There is also no way to cancel a job once it has started. Furthermore, only entire test sets can be sent to the job queue, even though it would often be useful to submit only a subset of the tests in a set; for example, there is currently no way to re-run a single failed test without executing the entire test set again.
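The three capabilities being asked for (list queued jobs, submit a subset of a test set, cancel a job) can be sketched as a small queue model. All names here are illustrative assumptions, not PractiTest's actual automation API:

```python
import itertools

# Minimal sketch of the requested job-queue management.
class JobQueue:
    def __init__(self):
        self._ids = itertools.count(1)
        self.jobs = {}                       # job_id -> job details

    def submit(self, test_set, tests=None):
        job_id = next(self._ids)
        self.jobs[job_id] = {
            "test_set": test_set,
            "tests": tests or "ALL",         # subset support, e.g. one failed test
            "status": "queued",
        }
        return job_id

    def pending(self):
        return [j for j, info in self.jobs.items() if info["status"] == "queued"]

    def cancel(self, job_id):
        if self.jobs[job_id]["status"] == "queued":
            self.jobs[job_id]["status"] = "cancelled"
            return True
        return False

q = JobQueue()
full = q.submit("Smoke tests")                         # whole test set
rerun = q.submit("Smoke tests", tests=["Login test"])  # just the failed test
q.cancel(full)
print(q.pending())   # only the subset re-run is still queued
```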
3 votes -
Better test step UI for running and editing test cases
Step result: rather than just changing the colour of the step name/heading, which gets lost among the links and other text above and below it, shade the whole step area/row.
If step N has a result provided (Pass, Fail, Blocked) and all the steps between step 0 and step N are blank, set steps 0 to N-1 to Pass. This will reduce the number of clicks a tester has to make in order to work through a test case.
Allow a step to be flagged as "needing review" while the test is being run so it can be completed…
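The auto-pass suggestion above (blank steps before step N set to Pass when N gets a result) can be sketched as a backfill over the step results. The list-of-results representation is a simplification for illustration:

```python
# When a result is entered for step n, backfill any still-blank earlier steps
# with "Pass"; steps that already have a result are left alone.
def backfill_passes(results, n, result):
    results = list(results)
    results[n] = result
    for i in range(n):
        if results[i] is None:       # only untouched steps are auto-passed
            results[i] = "Pass"
    return results

steps = [None, None, "Fail", None, None]
print(backfill_passes(steps, 4, "Pass"))
# ['Pass', 'Pass', 'Fail', 'Pass', 'Pass']
```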
3 votes -
Field data set on one screen should carry over to other screens
For example:
I have one custom field, Browser (a list), with values such as Chrome, Firefox, Safari, etc.
I add that custom field to these screens:
- Test
- Instance
- Issue
When I create a new test and set Browser = Firefox, then create a test set and an instance, I want the Instance screen to show the Browser field already filled in with the 'Firefox' value I selected on the Test screen.
Then I start the instance and there is an issue --> I create an issue, open the Issue screen, and…
3 votes
Ido Tandy
responded
-
Don't auto-enable the bookmarks panel in PDF reports
Since no bookmarks are created in the PDF versions of PractiTest reports anyway.
1 vote