sha*_*ong 6 python testing pytest python-3.x
I am using pytest to test a web scraper that pushes data to a database. The class only pulls the HTML and stores it in a database to be parsed later. Most of my tests use dummy data to represent the HTML.
I want a test that scrapes an actual page from the website, but I want the test to be turned off automatically unless explicitly enabled. A similar scenario would be an expensive or time-consuming test that you do not want to run every time.
I was expecting some kind of marker that suppresses a test unless I tell pytest to run all suppressed tests, but I do not see that in the documentation.
I tried to use the skipif marker and pass an argument to the script from the command prompt with pytest test_file.py 1, together with the code below in the test file. The problem is that when I provide an argument to test_file, pytest expects it to be another file name, so I get the error "no tests ran in 0.00 seconds, ERROR: file not found: 1".
if len(sys.argv) == 1:
    RUN_ALL_TESTS = False
else:
    RUN_ALL_TESTS = True

...
# other tests
...

@pytest.mark.skipif(RUN_ALL_TESTS)
def test_scrape_website():
    ...
I might be able to treat the test as a fixture and use @pytest.fixture(autouse=False); I am not sure how to override the autouse variable, though.
A similar solution was given in How to skip a pytest using an external fixture?, but it seems more complicated than what I need.
xve*_*ges 10
The docs describe exactly your problem: https://docs.pytest.org/en/latest/example/simple.html#control-skipping-of-tests-according-to-command-line-option. Copying from there:
Here is a conftest.py file adding a --runslow command line option to control skipping of pytest.mark.slow marked tests:
# content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now write a test module like this:
# content of test_module.py
import pytest

def test_func_fast():
    pass

@pytest.mark.slow
def test_func_slow():
    pass
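One aside, not part of the example above: recent pytest versions warn about unknown marks, so if you use a custom slow mark you may also want to register it. A minimal sketch of a pytest.ini that does this:

```ini
# content of pytest.ini
[pytest]
markers =
    slow: marks tests as slow (skipped unless --runslow is given)
```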
There are a couple of ways to handle this, but I'll go over two common approaches I've seen in Python codebases.
1) Separate your tests by putting the "optional" tests in another directory.
Not sure what your project layout looks like, but you can do something like this (only the test directory is important, the rest is just a toy example layout):
README.md
setup.py
requirements.txt
test/
    unit/
        test_something.py
        test_something_else.py
    integration/
        test_optional.py
application/
    __init__.py
    some_module.py
Then, when you invoke pytest, you invoke it by doing pytest test/unit if you want to run just the unit tests (i.e. only test_something*.py files), or pytest test/integration if you want to run just the integration tests (i.e. only test_optional.py), or pytest test if you want to run all the tests. So, by default, you can just run pytest test/unit.
I recommend wrapping these calls in some sort of script. I prefer make since it is powerful for this type of wrapping. Then you can say make test and it just runs your default (fast) test suite, or make test_all, and it'll run all the tests (which may or may not be slow).
Example Makefile you could wrap with:
.PHONY: all clean install test test_int test_all uninstall

all: install

clean:
	rm -rf build
	rm -rf dist
	rm -rf *.egg-info

install:
	python setup.py install

test: install
	pytest -v -s test/unit

test_int: install
	pytest -v -s test/integration

test_all: install
	pytest -v -s test

uninstall:
	pip uninstall app_name
2) Mark your tests judiciously with the @pytest.mark.skipif decorator, but use an environment variable as the trigger
I don't like this solution as much; it feels a bit haphazard to me (it's hard to tell which set of tests is being run on any given pytest invocation). However, what you can do is define an environment variable and read it in the test module to detect whether you want to run all your tests. Environment variables are shell dependent, but I'll assume a bash environment since that's a popular shell.
You could do export TEST_LEVEL="unit" for just fast unit tests (so this would be your default), or export TEST_LEVEL="all" for all your tests. Then in your test files, you can do what you were originally trying to do like this:
import os
...

# Use .get() so a missing variable defaults to "unit" instead of raising
# KeyError; skipif with a boolean condition also needs a reason.
@pytest.mark.skipif(
    os.environ.get("TEST_LEVEL", "unit") == "unit",
    reason="integration test; set TEST_LEVEL=all to run",
)
def test_scrape_website():
    ...
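If several tests share the check, the environment lookup can be factored into a tiny helper so every optional test skips consistently. A minimal sketch, assuming (as above) that a missing TEST_LEVEL means "unit"; should_run_integration is a hypothetical name, not a pytest API:

```python
import os

def should_run_integration(env=None):
    """Return True when integration-level tests should run.

    Reads TEST_LEVEL from the given mapping (os.environ by default),
    treating a missing variable the same as TEST_LEVEL="unit".
    """
    env = os.environ if env is None else env
    return env.get("TEST_LEVEL", "unit") == "all"

# With no variable set, integration tests stay off by default.
print(should_run_integration({}))                     # False
print(should_run_integration({"TEST_LEVEL": "all"}))  # True
```

You would then write @pytest.mark.skipif(not should_run_integration(), reason="set TEST_LEVEL=all to run") on each optional test.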
Note: naming the test levels "unit" and "all" is arbitrary; you can name them whatever you like. You can also have many levels (e.g. nightly tests or performance tests).
Also, I think option 1 is the best way to go, since it not only cleanly separates the tests, but also adds semantics and clarity to what the tests mean and represent. But there is no "one size fits all" solution in software; you have to decide which approach to take based on your own situation.
HTH!
A very simple solution is to use the -k parameter. You can use -k to deselect certain tests. -k tries to match its argument against any part of a test's name or its markers, and you can invert the match with not (you can also use the boolean operators and and or). So -k 'not slow' skips tests that have "slow" in their name, carry a marker with "slow" in its name, or whose class/module name contains "slow".
For example, given this file:
import pytest

def test_true():
    assert True

@pytest.mark.slow
def test_long():
    assert False

def test_slow():
    assert False
When run with:
pytest -k 'not slow'
it outputs something like this (note that both failing tests are deselected because they match the filter):
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
rootdir: c:\Users\User\Documents\python, inifile:
collected 3 items
test_thing.py . [100%]
============================= 2 tests deselected ==============================
=================== 1 passed, 2 deselected in 0.02 seconds ====================
Because of this eager matching, you may have to do things like put all your unit tests in a directory named unittest and mark the slow ones with slow_unittest (so you don't accidentally match a test that just happens to have "slow" in its name). You can then use -k 'unittest and not slow_unittest' to match all your quick unit tests.
Views: 1751