
Software quality

From Wikipedia, the free encyclopedia


In the context of software engineering, software quality measures how well software is designed (quality of design), and how well the software conforms to that design (quality of conformance),[1] although there are several different definitions.

For their certification in software quality engineering (CSQE), the American Society for Quality (ASQ) lists seven major topic areas in the 2008 CSQE body of knowledge.

  • General [quality] knowledge
  • Software quality management
  • Systems and software engineering processes
  • Project management
  • Software metrics and analysis
  • Software verification and validation (V&V)
  • Software configuration management

Whereas quality of conformance is concerned with implementation (see Software Quality Assurance), quality of design measures how valid the design and requirements are in creating a worthwhile product.[2]

Definition

One of the problems with Software Quality is that "everyone feels they understand it."[3] In addition to the definition above by Dr. Roger S. Pressman, other software engineering experts have given several definitions.

A definition in Steve McConnell's Code Complete similarly divides software into two pieces: internal and external quality characteristics. External quality characteristics are those parts of a product that face its users, whereas internal quality characteristics are those that do not.[4]

Another definition by Dr. Tom DeMarco says "a product's quality is a function of how much it changes the world for the better."[5] This can be interpreted as meaning that user satisfaction is more important than anything in determining software quality.[1]

Another definition, coined by Gerald Weinberg in Quality Software Management: Systems Thinking, is "Quality is value to some person." This definition stresses that quality is inherently subjective - different people will experience the quality of the same software very differently. One strength of this definition is the questions it invites software teams to consider, such as "Who are the people we want to value our software?" and "What will be valuable to them?".

History

Software product quality

Source code quality

To a computer, there is no real concept of "well-written" source code. To a human maintainer, however, the way a program is written can have important consequences. Many source code style guides, which stress readability and language-specific conventions, are aimed at the maintenance of the software source code, which involves debugging and updating. Other issues, such as the logical structuring of the code into manageable sections, also bear on whether code is well written.

One common method of improving source code quality is refactoring: restructuring existing code without changing its external behavior.
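As an illustration of refactoring (all names here are hypothetical), duplicated formatting logic shared by two report functions can be extracted into a single helper, so that future changes happen in one place:

```python
def format_header(title):
    # Extracted helper: the formerly duplicated logic now lives in one place.
    return title.strip().upper().center(40, "-")

def sales_report(title, rows):
    lines = [format_header(title)]
    lines += [f"{name}: {amount}" for name, amount in rows]
    return "\n".join(lines)

def inventory_report(title, items):
    lines = [format_header(title)]
    lines += [f"{sku} x{count}" for sku, count in items]
    return "\n".join(lines)
```

Before the refactoring, each report function would have carried its own copy of the header code; the external behavior is unchanged, but the duplication is gone.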

Software reliability

Software reliability is an important facet of software quality. It is defined as "the probability of failure-free operation of a computer program in a specified environment for a specified time".[6]

One of reliability's distinguishing characteristics is that it is objective, measurable, and can be estimated, whereas much of software quality consists of subjective criteria.[7] This distinction is especially important in the discipline of Software Quality Assurance. These measured criteria are typically called software metrics.
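As a sketch of how such an estimate can look, reliability engineering commonly models failure-free operation with a constant failure rate (the exponential model). The model and the function names below are assumptions for illustration, not something the definition above prescribes:

```python
import math

def reliability(failure_rate_per_hour, hours):
    """Probability of failure-free operation for the given duration,
    assuming a constant failure rate (exponential model)."""
    return math.exp(-failure_rate_per_hour * hours)

def mttf(failure_rate_per_hour):
    """Mean time to failure under the same constant-rate assumption."""
    return 1.0 / failure_rate_per_hour

# With one failure per 1000 hours on average, the chance of surviving
# a 100-hour run is exp(-0.1), roughly 90%.
survival = reliability(0.001, 100)
```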

History

With software embedded into many devices today, software failure has caused more than inconvenience. Software errors have even caused human fatalities. The causes have ranged from poorly designed user interfaces to direct programming errors. An example of a programming error that led to multiple deaths is discussed in Dr. Leveson's paper. This has resulted in requirements for the development of some types of software. In the United States, both the Food and Drug Administration (FDA) and Federal Aviation Administration (FAA) have requirements for software development.

The goal of reliability

The need for a means to objectively determine software quality comes from the desire to apply the techniques of contemporary engineering fields to the development of software. That desire is a result of the common observation, by both lay-persons and specialists, that computer software does not work the way it ought to. In other words, software is seen to exhibit undesirable behaviour, up to and including outright failure, with consequences for the data which is processed, the machinery on which the software runs, and by extension the people and materials which those machines might negatively affect. The more critical the application of the software to economic and production processes, or to life-sustaining systems, the more important is the need to assess the software's reliability.

Regardless of the criticality of any single software application, it is also more and more frequently observed that software has penetrated deeply into almost every aspect of modern life through the technology we use. It is to be expected that this infiltration will continue, along with an accompanying dependency on the software by the systems which maintain our society. As software becomes more and more crucial to the operation of the systems on which we depend, the argument goes, it only follows that the software should offer a concomitant level of dependability. In other words, the software should behave in the way it is intended, or even better, in the way it should.

The challenge of reliability

The circular logic of the preceding sentence is not accidental — it is meant to illustrate a fundamental problem in the issue of measuring software reliability, which is the difficulty of determining, in advance, exactly how the software is intended to operate. The problem seems to stem from a common conceptual error in the consideration of software, which is that software in some sense takes on a role which would otherwise be filled by a human being. This is a problem on two levels. Firstly, most modern software performs work which a human could never perform, especially at the high level of reliability that is often expected from software in comparison to humans. Secondly, software is fundamentally incapable of most of the mental capabilities of humans which separate them from mere mechanisms: qualities such as adaptability, general-purpose knowledge, a sense of conceptual and functional context, and common sense.

Nevertheless, most software programs could safely be considered to have a particular, even singular purpose. If the possibility can be allowed that said purpose can be well or even completely defined, it should present a means for at least considering objectively whether the software is, in fact, reliable, by comparing the expected outcome to the actual outcome of running the software in a given environment, with given data. Unfortunately, it is still not known whether it is possible to exhaustively determine either the expected outcome or the actual outcome of the entire set of possible environment and input data to a given program, without which it is probably impossible to determine the reliability with any certainty.

However, various attempts are underway to rein in the vastness of the space of programs and theoretical descriptions of programs. In the case of real software, such attempts to improve reliability can be applied at different stages of development. These stages principally include: requirements, design, programming, testing, and run-time evaluation. The study of theoretical software reliability is predominantly concerned with the concept of correctness, a mathematical field of computer science which is an outgrowth of language and automata theory.

Reliability in program development

Requirements

A program cannot be expected to work as desired if the developers of the program do not, in fact, know the program's desired behaviour in advance. This motivates efforts to formalize requirements before development begins. In step with the formalization effort is an attempt to help inform non-specialists, particularly non-programmers, who commission software projects without sufficient knowledge of what computer software is in fact capable of. Communicating this knowledge is made more difficult by the fact that, as hinted above, even programmers cannot always know in advance what is actually possible for software before trying.

Design

While requirements are meant to specify what a program should do, design is meant, at least at a high level, to specify how the program should do it. The usefulness of design is also questioned by some, but those who look to formalize the process of ensuring reliability often offer good software design processes as the most significant means to accomplish it. Software design usually involves the use of more abstract and general means of specifying the parts of the software and what they do. As such, it can be seen as a way to break a large program down into many smaller programs, such that those smaller pieces together do the work of the whole program.

The purposes of high-level design are as follows. It separates what are considered to be problems of architecture, or overall program concept and structure, from problems of actual coding, which solve problems of actual data processing. It applies additional constraints to the development process by narrowing the scope of the smaller software components, and thereby — it is hoped — removing variables which could increase the likelihood of programming errors. It provides a program template, including the specification of interfaces, which can be shared by different teams of developers working on disparate parts, such that they can know in advance how each of their contributions will interface with those of the other teams. Finally, and perhaps most controversially, it specifies the program independently of the implementation language or languages, thereby removing language-specific biases and limitations which would otherwise creep into the design, perhaps unwittingly on the part of programmer-designers.

Programming

The history of computer programming language development can often be best understood in the light of attempts to master the complexity of computer programs, which otherwise becomes more difficult to understand in proportion (perhaps exponentially) to the size of the programs. (Another way of looking at the evolution of programming languages is simply as a way of getting the computer to do more and more of the work, but this may be a different way of saying the same thing.) Lack of understanding of a program's overall structure and functionality is a sure way to fail to detect errors in the program, and thus the use of better languages should, conversely, reduce the number of errors by enabling a better understanding.

Improvements in languages tend to provide incrementally what software design has attempted to do in one fell swoop: consider the software at ever greater levels of abstraction. Such inventions as statement, sub-routine, file, class, template, library, component and more have allowed the arrangement of a program's parts to be specified using abstractions such as layers, hierarchies and modules, which provide structure at different granularities, so that from any point of view the program's code can be imagined to be orderly and comprehensible.

In addition, improvements in languages have enabled more exact control over the shape and use of data elements, culminating in the abstract data type. These data types can be specified to a very fine degree, including how and when they are accessed, and even the state of the data before and after it is accessed.
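A minimal sketch of an abstract data type (the class and its constraints are invented for illustration): the representation is hidden behind a small interface, and each operation checks the state of the data before granting access:

```python
class BoundedStack:
    """An abstract data type: callers use push/pop/len only; the
    internal list is not part of the interface."""

    def __init__(self, capacity):
        assert capacity > 0, "capacity must be positive"
        self._items = []          # hidden representation
        self._capacity = capacity

    def push(self, item):
        # Precondition checked before the state is modified.
        if len(self._items) >= self._capacity:
            raise OverflowError("stack is full")
        self._items.append(item)

    def pop(self):
        # Precondition checked before the state is read.
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def __len__(self):
        return len(self._items)
```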

Testing

Main article: Software Testing

Software testing, when done correctly, can increase overall software quality of conformance by testing that the product conforms to its requirements. Testing includes, but is not limited to:

  1. Unit Testing
  2. Functional Testing
  3. Performance Testing
  4. Failover Testing
  5. Usability Testing

A number of agile methodologies use testing early in the development cycle to ensure quality in their products. For example, the test-driven development practice, where tests are written before the code they will test, is used in Extreme Programming to ensure quality.
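A small test-first sketch (the `slugify` function and its tests are hypothetical, not from any particular project): the tests are written first and act as an executable specification, and the implementation is then written to make them pass:

```python
import unittest

# Written first: the tests specify the behaviour before slugify exists.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_space(self):
        self.assertEqual(slugify("  trim me  "), "trim-me")

# Written second: the minimal implementation that makes the tests pass.
def slugify(text):
    return "-".join(text.split()).lower()
```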

Run time

Run time reliability determinations are similar to tests, but go beyond simple confirmation of behavior to the evaluation of qualities such as performance and interoperability with other code or particular hardware configurations.

Software Quality Factors

A software quality factor is a non-functional requirement for a software program which is not stipulated in the customer's contract, but is nevertheless desirable and enhances the quality of the software program.

Some software quality factors are:

Understandability
The purpose of the software product is clear. This goes further than just a statement of purpose - all of the design and user documentation must be clearly written so that it is easily understandable. Obviously, the user context must be taken into account, e.g. if the software product is to be used by software engineers it is not required to be understandable to lay users.
Completeness
All parts of the software product are present and each of its parts are fully developed. For example, if the code calls a sub-routine from an external library, the software package must provide reference to that library and all required parameters must be passed. All required input data must be available.
Conciseness
No excessive information is present. This is important where memory capacity is limited, and it is important to reduce lines of code to a minimum. It can be improved by replacing repeated functionality by one sub-routine or function which achieves that functionality. This quality factor also applies to documentation.
Portability
The software product can be easily operated or made to operate on multiple computer configurations. This can be between multiple hardware configurations (such as server hardware and personal computers), multiple operating systems (e.g. Microsoft Windows and Linux-based operating systems), or both.
Consistency
The software contains uniform notation, symbology and terminology within itself.
Maintainability
The product facilitates updating to satisfy new requirements. A maintainable software product is simple, well-documented, and has spare capacity for processor and memory usage.
Testability
The software product facilitates the establishment of acceptance criteria and supports evaluation of its performance. Such a characteristic must be built-in during the design phase if the product is to be easily testable, since a complex design leads to poor testability.
Usability
The product is convenient and practicable to use. The component of the software which has most impact on this is the user interface (UI), which for best usability is usually graphical.
Reliability
The software can be expected to perform its intended functions satisfactorily over a period of time. Reliability also encompasses environmental considerations in that the product is required to perform correctly in whatever conditions it is operated in; this is sometimes termed robustness.
Structure
The software possesses a definite pattern of organization in its constituent parts.
Efficiency
The software product fulfills its purpose without wasting resources, e.g. memory or CPU cycles.
Security
The product is able to protect data against unauthorized access and to withstand malicious interference with its operations. Besides the presence of appropriate security mechanisms such as authentication, access control and encryption, security also implies reliability in the face of malicious, intelligent and adaptive attackers.

Measurement of software quality factors

There are varied perspectives within the field on measurement. A great many measures that are valued by some professionals, or in some contexts, are decried as harmful by others. Some believe that quantitative measures of software quality are essential; others believe that contexts in which quantitative measures are useful are quite rare, and so prefer qualitative measures. Several authorities in the field of software testing have written about these difficulties, including Dr. Cem Kaner and Douglas Hoffman.

One example of a popular metric is the number of faults encountered in the software. Software that contains few faults is considered by some to have higher quality than software that contains many faults. Questions that can help determine the usefulness of this metric in a particular context include:

  1. What constitutes 'many faults'? Does this differ depending on the purpose of the software (e.g. blogging software v. navigational software)? Does this take into account the size and complexity of the software?
  2. Does this account for the importance of the bugs (and the importance to the stakeholders of the people those bugs bug)? Does one try to weight this measure by the severity of the fault, or the incidence of users it affects? If so, how? And if not, how does one know that 100 faults discovered is better than 1000?
  3. If the count of faults being discovered is shrinking, how does one know what this means? For example, does it mean that the product is now of higher quality than it was before? Or that this is a smaller/less ambitious change than before? Or that fewer tester-hours have gone into the project than before? Or that this project was tested by less skilled testers than before? Or that the team has discovered that fewer reported faults is in their interest?

This last question points to an especially difficult issue to manage. All software quality metrics are in some sense measures of human behavior, since humans create software. If a team discovers that it will benefit from a drop in the number of reported bugs, there is a strong tendency for the team to start reporting fewer defects. That may mean that email begins to circumvent the bug tracking system, or that four or five bugs get lumped into one bug report, or that testers learn not to report minor annoyances. The difficulty is measuring what is intended to be measured, without creating incentives for software programmers and testers to consciously or unconsciously "game" the measurements.

Software quality factors cannot be measured directly because their descriptions are vague. It is necessary to find measures, or metrics, which can be used to quantify them as non-functional requirements. For example, reliability is a software quality factor, but cannot be evaluated in its own right. However, there are related attributes of reliability which can indeed be measured, such as mean time to failure, rate of failure occurrence, and availability of the system. Similarly, an attribute of portability is the number of target-dependent statements in a program.
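The measurable attributes just named can be computed directly from operational data. This sketch (the function names and sample figures are invented) shows mean time to failure, failure rate, and availability:

```python
def mean_time_to_failure(uptimes):
    """Average operating time observed between successive failures."""
    return sum(uptimes) / len(uptimes)

def failure_rate(uptimes):
    """Failures per unit time: the reciprocal of MTTF."""
    return 1.0 / mean_time_to_failure(uptimes)

def availability(total_uptime, total_downtime):
    """Fraction of total time the system was operational."""
    return total_uptime / (total_uptime + total_downtime)

# Three runs of 90, 110 and 100 hours before each failure give an
# MTTF of 100 hours; with 10 hours of total repair time the
# availability is 300 / 310, about 0.968.
```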

A scheme which could be used for evaluating software quality factors is given below. For every characteristic, there are a set of questions which are relevant to that characteristic. Some type of scoring formula could be developed based on the answers to these questions, from which a measure of the characteristic may be obtained.

Understandability

Are variable names descriptive of the physical or functional property represented? Do uniquely recognizable functions contain adequate comments so that their purpose is clear? Are deviations from forward logical flow adequately commented? Are all elements of an array functionally related?

Conciseness

Is all code reachable? Is any code redundant? How many statements within loops could be placed outside the loop, thus reducing computation time? Are branch decisions too complex?
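One of the questions above, moving statements out of loops, can be illustrated with a small before-and-after sketch (both functions are invented for illustration):

```python
import math

def scale_all_slow(values, factor):
    out = []
    for v in values:
        # The square root does not depend on v, yet it is
        # recomputed on every iteration.
        out.append(v * math.sqrt(factor))
    return out

def scale_all(values, factor):
    s = math.sqrt(factor)  # hoisted out of the loop: computed once
    return [v * s for v in values]
```

Both functions return identical results; the second simply avoids repeating a loop-invariant computation.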

Portability

  • Does the program depend upon system or library routines unique to a particular installation? Have machine-dependent statements been flagged and commented? Has dependency on internal bit representation of alphanumeric or special characters been avoided?
  • The effort required to transfer the program from one hardware/software system environment to another.

Consistency

Is one variable name used to represent different physical entities in the program? Does the program contain only one representation for physical or mathematical constants? Are functionally similar arithmetic expressions similarly constructed? Is a consistent scheme for indentation used?

Maintainability

Maintainability of software depends heavily on the process used to develop it. The quality of the software maintenance process can be assessed using the software maintenance maturity model (S3M).

Assessing the maintainability of the software product is done from four perspectives: its analyzability, its changeability, its stability after a change, and its testability.

Testability

Are complex structures employed in the code? Does the detailed design contain clear pseudo-code? Is the pseudo-code at a higher level of abstraction than the code? If tasking is used in concurrent designs, are schemes available for providing adequate test cases?

Usability

  • Is a GUI used? Is there adequate on-line help? Is a user manual provided? Are meaningful error messages provided?
  • The effort required to learn, operate, prepare input for, and interpret the output of a program.

Reliability

  • Are loop indexes range tested? Is input data checked for range errors? Is divide-by-zero avoided? Is exception handling provided?
  • The extent to which a program can be expected to perform its intended function with required precision.
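The reliability questions above can be made concrete with a small defensive-programming sketch (the function is hypothetical): input is range-checked, division by zero is avoided, and errors are signalled with exceptions the caller can handle:

```python
def average_rate(total, count):
    """Average of `total` over `count` observations, with the checks
    the questions above ask for."""
    if total is None or count is None:
        raise ValueError("missing input")               # input data checked
    if count < 0:
        raise ValueError("count must be non-negative")  # range test
    if count == 0:
        return 0.0                                      # divide-by-zero avoided
    return total / count

# Exception handling provided: the caller decides how to recover.
try:
    average_rate(10.0, -5)
except ValueError:
    pass
```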

Structure

Is a block-structured programming language used? Are modules limited in size? Have the rules for transfer of control between modules been established and followed?

Efficiency

  • Have functions been optimized for speed? Have repeatedly used blocks of code been formed into sub-routines? Has the code been checked for memory leaks and overflows?
  • The amount of computing resources and code required by a program to perform its function.

Security

Does the software protect itself and its data against unauthorized access and use? Does it allow its operator to enforce security policies? Are appropriate security mechanisms in place? Are those security mechanisms implemented correctly? Can the software withstand attacks that must be expected in its intended environment? Is the software free of errors that would make it possible to circumvent its security mechanisms? Does the architecture limit the impact of as-yet-unknown errors? Security testing of any developed system is about finding loopholes and weaknesses in the system.

User's perspective

In addition to the technical qualities of software, the end user's experience also determines the quality of software. This aspect of software quality is called usability. It is hard to quantify the usability of a given software product. Some important questions to be asked are:

  • Is the user interface intuitive?
  • Is it easy to perform simple operations?
  • Is it feasible to perform difficult operations?
  • Does the software give sensible error messages?
  • Do widgets behave as expected?
  • Is the software well documented?
  • Is the user interface self-explanatory/ self-documenting?
  • Is the user interface responsive or too slow?

Also, the availability of (free or paid) support may determine the usability of the software.

Bibliography

  • International Organization for Standardization. Software Engineering — Product Quality — Part 1: Quality Model. ISO, Geneva, Switzerland, 2001. ISO/IEC 9126-1:2001(E).
  • Diomidis Spinellis. Code Quality: The Open Source Perspective. Addison Wesley, Boston, MA, 2006.
  • Ho-Won Jung, Seung-Gweon Kim, and Chang-Sin Chung. Measuring software product quality: A survey of ISO/IEC 9126. IEEE Software, 21(5):10–13, September/October 2004.
  • Stephen H. Kan. Metrics and Models in Software Quality Engineering. Addison-Wesley, Boston, MA, second edition, 2002.
  • Robert L. Glass. Building Quality Software. Prentice Hall, Upper Saddle River, NJ, 1992.

References

  1. ^ a b Pressman, Roger S. Software Engineering: A Practitioner's Approach. Sixth Edition, International, p 746. McGraw-Hill Education 2005.
  2. ^ Pressman, Roger S. Software Engineering: A Practitioner's Approach. Sixth Edition, International, p 388. McGraw-Hill Education 2005.
  3. ^ Crosby, P., Quality is Free, McGraw-Hill, 1979
  4. ^ McConnell, Steve. Code Complete First Ed, p. 558. Microsoft Press 1993
  5. ^ DeMarco, T., "Management Can Make Quality (If)possible," Cutter IT Summit, Boston, April 1999
  6. ^ Musa, J.D, A. Iannino, and K. Okumoto, Engineering and Managing Software with Reliability Measures, McGraw-Hill, 1987
  7. ^ Pressman, Roger S. Software Engineering: A Practitioner's Approach, Sixth Edition International, McGraw-Hill International, 2005, p 762.