(See the article on how to get a working copy of the repository.)

Introduction

In the previous article, we presented an approach for capturing similarity between words that was concerned with the syntactic similarity of two strings. Today, we are back to discuss another approach that is more concerned with the meaning of words. Semantic similarity is a confidence score that reflects the semantic relation between the meanings of two sentences. It is difficult to gain a high accuracy score because the exact semantic meanings are completely understood only in a particular context.

The goals of this paper are to:

  • Present to you some dictionary-based algorithms to capture the semantic similarity between two sentences, which is heavily based on the WordNet semantic dictionary.
  • Encourage you to work with the interesting topic of NLP.

Groundwork

Before we go any further, let us start with some brief introduction of the groundwork.

WordNet

WordNet is a lexical database which is available online, and provides a large repository of English lexical items. There is a multilingual WordNet for European languages which is structured in the same way as the English language WordNet.

WordNet was designed to establish connections between four types of parts of speech (POS): noun, verb, adjective, and adverb. The smallest unit in WordNet is the synset, which represents a specific meaning of a word. It includes the word, its explanation, and its synonyms. The specific meaning of one word under one type of POS is called a sense. Each sense of a word is in a different synset. Synsets are equivalent to senses: structures containing sets of terms with synonymous meanings. Each synset has a gloss that defines the concept it represents. For example, the words night, nighttime, and dark constitute a single synset that has the following gloss: the time after sunset and before sunrise while it is dark outside. Synsets are connected to one another through explicit semantic relations. Some of these relations (hypernym and hyponym for nouns; hypernym and troponym for verbs) constitute is-a-kind-of hierarchies, while others (holonym and meronym for nouns) constitute is-a-part-of hierarchies.

For example, tree is a kind of plant: tree is a hyponym of plant, and plant is a hypernym of tree. Analogously, trunk is a part of a tree: trunk is a meronym of tree, and tree is a holonym of trunk. If a word has more than one sense for a given POS, WordNet orders the senses from the most frequently used to the least frequently used (frequencies taken from the SemCor corpus).

WordNet.NET

Malcolm Crowe and Troy Simpson have developed an Open-Source .NET Framework library for WordNet, called WordNet.Net.

WordNet.Net was originally created by Malcolm Crowe, and it was known as a C# library for WordNet. It was created for WordNet 1.6, and stayed in its original form until after the release of WordNet 2.0 when Troy gained permission from Malcolm to use the code for freeware dictionary/thesaurus projects. Finally, after WordNet 2.1 was released, Troy released his version of Malcolm's library as an LGPL library known as WordNet.Net (with permission from Princeton and Malcolm Crowe, and in consultation with the Free Software Foundation), which was updated to work with the WordNet 2.1 database.

At the time of this writing, the WordNet.Net library has only recently been open-sourced, but it is expected to mature as more projects such as this one spawn from its availability. Bug fixes and extensions to Malcolm's original library had been ongoing for over a year and a half prior to the release of the open source project.

Semantic similarity between sentences

Given two sentences, the measurement determines how similar the meaning of two sentences is. The higher the score, the more similar the meaning of the two sentences.

Here are the steps for computing semantic similarity between two sentences:

Tokenization

Each sentence is partitioned into a list of words, and the stop words are removed. Stop words are frequently occurring, insignificant words that appear in a database record, article, or web page.
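The tokenization step can be sketched as follows. This is an illustrative Python snippet rather than the article's C# code, and the stop-word list is a tiny assumed sample; real systems use much larger lists.

```python
import re

# Illustrative stop-word list; real systems use much larger ones.
STOP_WORDS = {"a", "an", "the", "is", "of", "to", "in", "on", "and"}

def tokenize(sentence):
    """Split a sentence into lowercase word tokens and drop stop words."""
    words = re.findall(r"[a-z]+", sentence.lower())
    return [w for w in words if w not in STOP_WORDS]

print(tokenize("The defense of the ministry"))  # ['defense', 'ministry']
```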

Tagging part of speech (+)

This task is to identify the correct part of speech (POS, such as noun, verb, pronoun, adverb, ...) of each word in the sentence. The algorithm takes a sentence and a specified tag set (a finite list of POS tags) as input; the output is a single best POS tag for each word. There are two types of taggers: the first attaches syntactic roles to each word (subject, object, ...), and the second attaches only functional roles (noun, verb, ...). A lot of work has been done on POS tagging, and taggers can be classified as rule-based or stochastic. Rule-based taggers use hand-written rules to resolve tag ambiguity; an example is Brill's tagger (Eric Brill's algorithm). Stochastic taggers resolve tagging ambiguities by using a training corpus to compute the probability of a given word having a given tag in a given context; examples are taggers based on hidden Markov models and maximum-likelihood estimation.

Brill Tagger sample for C#

There are two samples included for using the Brill Tagger from a C# application. The Brill Tagger tools, libraries, and samples can be found under the 3rd_Party_Tools_Data folder in the source repository.

One of the available ports is a VB.NET port by Steven Abbott of the original Brill Tagger. That port has been in turn ported to C# by Troy Simpson. The other is a port to VC++ by Paul Maddox. The C# test program for Paul Maddox's port uses a wrapper to read stdout directly from the command line application. The wrapper was created using a template by Mike Mayer.

See the respective test applications for working examples on using the Brill Tagger from C#. The port of Steven Abbott's work is fairly new, but after some testing, it is likely that Paul's VC++ port will be deprecated and replaced with Troy's C# port of Steven's VB.NET work.

Stemming word (+)

We use the Porter stemming algorithm. Porter stemming is the process of removing the common morphological and inflectional endings of words. It can be thought of as a lexicon finite-state transducer with the following steps: surface form -> split the word into possible morphemes -> intermediate form -> map stems to categories and affixes to meanings -> underlying form. E.g.: foxes -> fox + es -> fox.

(+) Currently, these components are not used in the semantic similarity project but will soon be integrated. To explore these ideas, you can use the PorterStemmer class and the Brill Tagger sample in the repository.
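To convey the flavour of suffix stripping, here is a toy Python sketch. It implements only a few plural-style rules reminiscent of Porter's step 1a; the real Porter algorithm has many more rules and measure conditions, so this is an assumption-laden illustration, not the PorterStemmer class from the repository.

```python
def simple_stem(word):
    """Toy suffix stripper illustrating the flavour of Porter's first step.
    The real Porter algorithm has many more rules and measure conditions."""
    for suffix, replacement in (("sses", "ss"), ("ies", "i"), ("es", ""), ("s", "")):
        if word.endswith(suffix) and word != suffix:
            return word[: len(word) - len(suffix)] + replacement
    return word

print(simple_stem("foxes"))     # fox
print(simple_stem("caresses"))  # caress
print(simple_stem("cats"))      # cat
```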

Semantic relatedness and Word Sense Disambiguation (WSD)

As you are already aware, a word can have more than one sense that can lead to ambiguity. For example: the word "interest" has different meanings in the following two contexts:

  • "Interest" from a bank.
  • "Interest" in a subject.

WSD with the original Michael Lesk algorithm

Disambiguation is the process of finding the most appropriate sense of a word used in a given sentence. The Lesk algorithm [13] uses dictionary definitions (glosses) to disambiguate a polysemous word in a sentence context. The core idea is to count the number of words shared between two glosses: the more overlapping words, the more related the senses.

To disambiguate a word, the gloss of each of its senses is compared to the glosses of every other word in a phrase. A word is assigned to the sense whose gloss shares the largest number of words in common with the glosses of the other words.

For example, in disambiguating the phrase "pine cone", according to the Oxford Advanced Learner's Dictionary, the word "pine" has two senses:

  • sense 1: kind of evergreen tree with needle-shaped leaves,
  • sense 2: waste away through sorrow or illness.

The word "cone" has three senses:

  • sense 1: solid body which narrows to a point,
  • sense 2: something of this shape, whether solid or hollow,
  • sense 3: fruit of a certain evergreen tree.

By comparing each of the two gloss senses of "pine" with each of the three senses of "cone", we find that the words "evergreen tree" occur in one sense of each word. These two senses are therefore declared to be the most appropriate senses when "pine" and "cone" are used together.
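The original Lesk selection over the "pine cone" example can be sketched in a few lines of Python (an illustration, not the article's C# code), using the dictionary glosses quoted above and a plain bag-of-words overlap count:

```python
def lesk_overlap(gloss1, gloss2):
    """Original Lesk: count distinct words shared between two glosses."""
    return len(set(gloss1.lower().split()) & set(gloss2.lower().split()))

pine = {1: "kind of evergreen tree with needle-shaped leaves",
        2: "waste away through sorrow or illness"}
cone = {1: "solid body which narrows to a point",
        2: "something of this shape whether solid or hollow",
        3: "fruit of a certain evergreen tree"}

# Pick the (pine, cone) sense pair with the largest gloss overlap.
best = max(((p, c) for p in pine for c in cone),
           key=lambda pc: lesk_overlap(pine[pc[0]], cone[pc[1]]))
print(best)  # (1, 3): the glosses share "of", "evergreen", and "tree"
```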

The original Lesk algorithm begins anew for each word and does not reuse the senses it previously assigned. This greedy method does not always work effectively. Therefore, if computation time is not critical, we should consider finding an optimal sense combination by applying local search techniques such as beam search. The major idea behind such methods is to reduce the search space by applying heuristics. A beam searcher limits its attention to only the k most promising candidates at each stage of the search process, where k is a predefined number.

The adapted Michael Lesk algorithm

The original Lesk algorithm used only the gloss of a word and was restricted to a simple overlap scoring mechanism. In this section, we introduce an adapted version of the algorithm [16] with some improvements to overcome these limitations:

  • Access a dictionary with senses arranged in a hierarchical order (WordNet). This extended version uses not only the gloss/definition of the synset, but also considers the meaning of related words.
  • Apply a new scoring mechanism to measure gloss overlap that gives a more accurate score than the original Lesk bag of words counter.

To disambiguate each word in a sentence that has N words, we call each word to be disambiguated as a target word. The algorithm is described in the following steps:

  1. Select a context: to optimize computation time when N is large, we define a context of K words around the target word (its k-nearest neighbors) as the sequence from K/2 words to the left of the target word to K/2 words to its right. This reduces the search space and the processing time. For example, if K is four, there will be two words to the left of the target word and two words to the right.
  2. For each word in the selected context, we look up and list all the possible senses of both POS (part of speech) noun and verb.
  3. For each sense of a word (WordSense), we list the following relations (example of pine and cone):
    • Its own gloss/definition that includes example texts that WordNet provides to the glosses.
    • The gloss of the synsets that are connected to it through the hypernym relations. If there is more than one hypernym for a word sense, then the glosses for each hypernym are concatenated into a single gloss string (*).
    • The gloss of the synsets that are connected to it through the hyponym relations (*).
    • The gloss of the synsets that are connected to it through the meronym relations (*).
    • The gloss of the synsets that are connected to it through the troponym relations (*).
    • (*) All of them are applied with the same rule.

  4. Combine all possible gloss pairs that are archived in the previous steps, and compute the relatedness by searching for overlap. The overall score is the sum of the scores for each relation pair.

    When computing the relatedness between two synsets s1 and s2, the pair hype-hype means the gloss for the hypernym of s1 is compared to the gloss for the hypernym of s2. The pair hype-hypo means that the gloss for the hypernym of s1 is compared to the gloss for the hyponym of s2.

    OverallScore(s1, s2) = Score(hype(s1)-hypo(s2)) +
        Score(gloss(s1)-hypo(s2)) + Score(hype(s1)-gloss(s2)) + ...

    (OverallScore(s1, s2) is equivalent to OverallScore(s2, s1).)

    In the "pine cone" example, there are 3 senses of pine and 6 senses of cone, so there is a total of 18 possible combinations. One of them is the right one.

    To score the overlap, we use a new scoring mechanism that differentiates between single-word and N-consecutive-word overlaps, rather than treating each gloss as a mere bag of words. It is motivated by Zipf's law, which says that the length of a word is inversely proportional to its frequency of use: the shortest words are used most often, the longest least often.

    Measuring the overlap between two strings reduces to finding the longest common substring with maximal consecutive words. Each overlap of N consecutive words contributes N^2 to the score of the gloss sense combination. For example, an overlap "A B C" scores 3^2 = 9, while the two separate overlaps "A B" and "C" score 2^2 + 1^2 = 5.

  5. Once each combination has been scored, we pick the sense with the highest score as the most appropriate sense for the target word in the selected context. The output gives us not only the most appropriate sense, but also the associated part of speech of the word.
  6. If you intend to work further on this topic, you should also look at the measurement of Hirst and St-Onge, which is based on finding lexical chains between synsets.
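The overlap scoring described in step 4 above can be sketched in Python (an illustration, not the article's C# code): find the longest common run of consecutive words, add its squared length to the score, remove it from both glosses, and repeat.

```python
def longest_common_run(a, b):
    """Longest common run of consecutive words between token lists a and b
    (classic dynamic-programming longest-common-substring over words)."""
    best_len, best_end = 0, 0
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > best_len:
                    best_len, best_end = dp[i][j], i
    return a[best_end - best_len: best_end]

def overlap_score(a, b):
    """Each shared run of n consecutive words contributes n^2 to the score;
    runs are removed greedily, longest first."""
    a, b = list(a), list(b)
    score = 0
    while True:
        run = longest_common_run(a, b)
        if not run:
            return score
        score += len(run) ** 2
        # Remove the matched run from both token lists.
        for seq in (a, b):
            for k in range(len(seq) - len(run) + 1):
                if seq[k: k + len(run)] == run:
                    del seq[k: k + len(run)]
                    break

print(overlap_score("a b c".split(), "a b c".split()))    # 9 (one 3-word run)
print(overlap_score("a b x c".split(), "a b y c".split()))  # 2^2 + 1^2 = 5
```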

Semantic similarity between two synsets

The above method allows us to find the most appropriate sense for each word in a sentence. To compute the similarity between two sentences, we build on the semantic similarity between word senses, which we capture using path length similarity.

In WordNet, the words of each part of speech (nouns, verbs, ...) are organized into taxonomies where each node is a set of synonyms (a synset) representing one sense. If a word has more than one sense, it appears in multiple synsets at various locations in the taxonomy. WordNet defines relations both between synsets and between word senses. A relation between synsets is a semantic relation; a relation between word senses is a lexical relation. The difference is that lexical relations hold between members of two different synsets, whereas semantic relations hold between the two synsets as wholes. For instance:

  • Semantic relations are hypernym, hyponym, holonym, etc.
  • Lexical relations are antonym relation and the derived form relation.

For example, the antonym of the tenth sense of the noun light (light#n#10) in WordNet is the first sense of the noun dark (dark#n#1). The synset to which light#n#10 belongs is {light#n#10, lighting#n#1}. Clearly, light#n#10 is an antonym of dark#n#1, but lighting#n#1 is not; therefore, the antonym relation must be a lexical relation, not a semantic relation. Semantic similarity is a special case of semantic relatedness where we consider only the IS-A relationship.

The path length-based similarity measurement

To measure the semantic similarity between two synsets, we use hyponym/hypernym (or is-a relations). Due to the limitation of is-a hierarchies, we only work with "noun-noun", and "verb-verb" parts of speech.

A simple way to measure the semantic similarity between two synsets is to treat the taxonomy as an undirected graph and measure the distance between them in WordNet. As P. Resnik put it: "The shorter the path from one node to another, the more similar they are." Note that the path length is measured in nodes/vertices rather than in links/edges: the length of the path between two members of the same synset is 1 (the synonym relation).

This figure shows an example of the hyponym taxonomy in WordNet used for path length similarity measurement:

In the above figure, we observe that the length between car and auto is 1, between car and truck is 3, between car and bicycle is 4, and between car and fork is 12.
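The node-counting path length can be computed with a breadth-first search. The sketch below is Python over a tiny hand-built fragment of the is-a hierarchy (an assumed toy graph, not real WordNet data), reproducing the car/truck and car/bicycle distances from the figure:

```python
from collections import deque

# A tiny illustrative fragment of the noun is-a hierarchy (not real WordNet data).
edges = [("car", "motor_vehicle"), ("truck", "motor_vehicle"),
         ("motor_vehicle", "wheeled_vehicle"), ("bicycle", "wheeled_vehicle")]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def path_length(src, dst):
    """Shortest path measured in nodes (node counting): two members of the
    same synset have length 1, as in the article."""
    seen, queue = {src}, deque([(src, 1)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no path between the two synsets

print(path_length("car", "truck"))    # 3 (car -> motor_vehicle -> truck)
print(path_length("car", "bicycle"))  # 4
```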

A shared parent of two synsets is known as a subsumer. The least common subsumer (LCS) of two synsets is the subsumer that does not have any children that are also subsumers of the two synsets. In other words, the LCS of two synsets is their most specific subsumer. Returning to the above example, the LCS of {car, auto, ...} and {truck, ...} is {automotive, motor vehicle}, since {automotive, motor vehicle} is more specific than the common subsumer {wheeled vehicle}.

The path length gives us a simple way to compute the relatedness distance between two word senses. There are some issues that need to be addressed:

  • It is possible for two synsets from the same part of speech to have no common subsumer. Since the different top nodes of each part-of-speech taxonomy are not joined, a path cannot always be found between two synsets. If a unique root node is added, however, a path always exists between any two noun/verb synsets.
  • Multiple inheritance is allowed in WordNet; some synsets belong to more than one taxonomy. If there is more than one path between two synsets, the shortest such path is selected.
  • Lemmatization: when looking up a word in WordNet, the word is first lemmatized. Therefore, the distance between "book" and "books" is 0, since they are identical after lemmatization. But what about "mice" and "mouse"?
  • This measurement only compares word senses that have the same part of speech (POS); we do not compare a noun with a verb because they are located in different taxonomies. We consider only words that are nouns, verbs, or adjectives. In the absence of a POS tagger, we use Jeff Martin's Lexicon class: when considering a word, we first check whether it is a noun; if so, we treat it as a noun and disregard its verb or adjective senses. If it is not a noun, we check whether it is a verb, and so on.
  • Compound nouns like "travel agent" are treated as two single words by the tokenization.

Measuring similarity (MS1)

There are many proposals for measuring semantic similarity between two synsets: Wu & Palmer, Leacock and Chodorow, P. Resnik. In this work, we experimented with two simple measurements:

Sim(s, t) = 1 / distance(s, t)
  • where distance(s, t) is the path length from s to t using node counting.

Measuring similarity (MS2)

This formula was used in the previous article, which not only took into account the length of the path, but also the order of the sense involved in this path:

Sim(s, t) = SenseWeight(s) * SenseWeight(t) / PathLength(s, t)
  • where s and t denote the source and target word senses being compared.
  • SenseWeight denotes a weight calculated as the frequency of use of this sense divided by the total frequency of use of all senses of the word.
  • PathLength denotes the length of the connection path from s to t.
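Both measurements are one-liners; the following Python sketch shows them side by side. The frequency numbers (8 of 10 uses for the source sense, 6 of 10 for the target) are hypothetical, chosen only to illustrate the arithmetic.

```python
def sense_weight(freq, total_freq):
    """SenseWeight: frequency of this sense over the total frequency
    of all senses of the word."""
    return freq / total_freq

def sim_ms1(path_length):
    """MS1: inverse of the node-counting path length."""
    return 1.0 / path_length

def sim_ms2(weight_s, weight_t, path_length):
    """MS2: MS1 scaled by the sense weights of the two word senses."""
    return weight_s * weight_t / path_length

# Hypothetical frequencies: source sense used 8 of 10 times, target 6 of 10.
ws, wt = sense_weight(8, 10), sense_weight(6, 10)
print(sim_ms1(3))          # 0.333...
print(sim_ms2(ws, wt, 3))  # 0.8 * 0.6 / 3 = 0.16
```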

Semantic similarity between two sentences

We will now describe the overall strategy for capturing semantic similarity between two sentences. Given two sentences X and Y, let m be the length of X and n the length of Y. The major steps are as follows:

  1. Tokenization.
  2. Perform word stemming.
  3. Perform part of speech tagging.
  4. Word sense disambiguation.
  5. Build a semantic similarity relative matrix R[m, n] over the pairs of word senses, where R[i, j] is the semantic similarity between the most appropriate sense of the word at position i of X and the most appropriate sense of the word at position j of Y. Thus, R[i, j] is also the weight of the edge connecting i to j. If a word does not exist in the dictionary, we use the edit-distance similarity instead and output a lower associated weight; for example, an abbreviation like CTO (Chief Technology Officer). Another solution for abbreviations is to use an abbreviation dictionary or abbreviation pattern recognition rules.
  6. We formulate the problem of capturing semantic similarity between sentences as the problem of computing a maximum total matching weight of a bipartite graph, where X and Y are two sets of disjoint nodes. We use the Hungarian method to solve this problem; please refer to our previous article on capturing similarity between two strings. If computational time is critical, we can use a simple fast heuristic method as follows. The pseudo code is:
  7. 
     ScoreSum <- 0;
     foreach (X[i] in X) {
         bestCandidate <- -1;
         bestScore <- -maxInt;
         foreach (Y[j] in Y) {
             if (Y[j] is still free && R[i, j] > bestScore) {
                 bestScore <- R[i, j];
                 bestCandidate <- j;
             }
         }
         if (bestCandidate != -1) {
             mark bestCandidate as a matched item;
             ScoreSum <- ScoreSum + bestScore;
         }
     }
  8. The match results from the previous step are combined into a single similarity value for two sentences. There are many strategies to acquire an overall combined similarity value for sets of matching pairs. In the previous section, we presented two simple formulas to compute semantic similarity between two word-senses. For each formula, we apply an appropriate strategy to compute the overall score:
    • Matching average: 2 * Match(X, Y) / (|X| + |Y|), where Match(X, Y) is the sum of the similarity values of the matching word tokens between X and Y. The similarity is computed by dividing the sum of the similarity values of all match candidates of both sentences X and Y by the total number of tokens. Because it is based on the individual similarity values, the overall similarity always reflects their influence. We apply this strategy with the MS1 formula.
    • Dice coefficient: 2 * |Match(X, Y)| / (|X| + |Y|), the ratio of the number of tokens that can be matched to the total number of tokens. We apply this strategy with the MS2 formula. Dice will always return a higher value than the matching average, and is thus more optimistic. In this strategy, we need to predefine a threshold for selecting the matching pairs whose values exceed it.
    • (The Cosine, Jaccard, and Simpson coefficients will be considered in other situations.)

    For example: Given two sentences X and Y, X and Y have lengths of 3 and 2, respectively. The bipartite matcher returns that X[1] has matched Y[1] with a score of 0.8, X[2] has matched Y[2] with a score of 0.7:

    • Using the matching average, the overall score is: 2 * (0.8 + 0.7) / (3 + 2) = 0.6.
    • Using Dice with a threshold of 0.5: both matching pairs have scores greater than the threshold, so we have a total of 2 matching pairs.
    • The overall score is: 2 * (1 + 1) / (3 + 2) = 0.8.
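The greedy heuristic from the pseudo code in step 7 and the matching-average combination can be sketched together in Python (an illustration, not the article's C# code), reproducing the example above; the similarity matrix R here is filled with assumed values that yield the two matches scoring 0.8 and 0.7:

```python
def greedy_match(R):
    """Greedy heuristic from the pseudo code: for each row, pick the best
    still-free column; returns a list of (i, j, score) matches."""
    matched_cols, matches = set(), []
    for i, row in enumerate(R):
        best_j, best_score = -1, float("-inf")
        for j, score in enumerate(row):
            if j not in matched_cols and score > best_score:
                best_j, best_score = j, score
        if best_j != -1:
            matched_cols.add(best_j)
            matches.append((i, best_j, best_score))
    return matches

def matching_average(matches, m, n):
    """Overall score: 2 * (sum of match scores) / (m + n)."""
    return 2 * sum(s for _, _, s in matches) / (m + n)

# The article's example: |X| = 3, |Y| = 2, matches scoring 0.8 and 0.7
# (the other matrix entries are assumed values for illustration).
R = [[0.8, 0.1],
     [0.2, 0.7],
     [0.0, 0.0]]
print(matching_average(greedy_match(R), 3, 2))  # 2 * (0.8 + 0.7 + 0.0) / 5 = 0.6
```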

Using the code

To run this code, you should install WordNet 2.1. Currently, the source code is stored at the Google Code repository. Please read the article: Using the WordNet.Net subversion repository before downloading the source code. This code is used to test the semantic similarity function:

void Test()
{
    SemanticSimilarity semsim = new SemanticSimilarity();
    float score = semsim.GetScore("Defense Ministry",
                                  "Department of defence");
}

Future works

Time restrictions are a problem; whenever possible, we would like to:

  • Improve the usability of this experiment.
  • Extend the WSD algorithm with supervised learning with such methods as the Naive Bayesian Classifier model.
  • Disambiguate part of speech using probabilistic decision trees.

Conclusion

In this article, you have seen a simple approach to capturing semantic similarity. This work may have many limitations, since we are not an NLP research group, and there are still things to improve. Once the final work is approved, we will move a copy to CodeProject. This process may take a few working days.

There is a Perl open source package for semantic similarity from T. Pedersen and his team. Unfortunately, we do not know Perl, so it would be very helpful if someone could port it to .NET. We will stop here for now, and hope that others might be inspired to work on WordNet.Net and develop this open source library to make it more useful.

Acknowledgements

  • Many thanks to: WordNet Princeton; M. Crowe; T. Pedersen and his team: S. Banerjee, J. Michelizzi, S. Patwardhan; M. Lesk; E. Brill; M. Porter; P. Resnik; G. Hirst and D. St-Onge; T. Syeda-Mahmood, L. Yan, W. Urban; H. Do, E. Rahm; X. Su, J.A. Gulla; F. Guinchiglia, M. Yatskevich; A.V. Goldberg ... for the results of their research papers that we used.
  • We would like to thank M.A. Warin, J. Martin, C. Lemon, R. Northedge, S. Abbott, P. Maddox who have provided helpful documents, resources, and insightful comments during this work.

Articles worth reading

  1. A.V. Goldberg, R. Kennedy: An efficient cost scaling algorithm for the assignment problem, 1993.
  2. WordNet by Princeton.
  3. T. Syeda-Mahmood, L. Yan, W. Urban: Semantic search of schema repositories, 2005.
  4. T. Dao: An improvement on capturing similarity between strings, 2005 (*).
  5. T. Simpson: WordNet.NET, 2005 (*).
  6. T. Pedersen, S. Banerjee, S. Patwardhan: Maximize semantic relatedness to perform word sense disambiguation, 2005.
  7. H. Do, E. Rahm: COMA - A system for flexible combination of schema matching approach, 2002.
  8. P. Resnik: WordNet and class-based probabilities.
  9. J. Michelizzi: Master of science thesis, 2005.
  10. X. Su and J.A. Gulla: Semantic enrichment for ontology mapping, 2004.
  11. F. Guinchiglia, M. Yatskevich: Element Level Semantic Matching, 2004.
  12. G. Hirst, D.St. Onge: Lexical chains as representation of context for the detection and correction of malapropisms.
  13. M. Lesk: Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone, 1986.
  14. E. Brill: A simple rule-based part of speech tagger, 1993.
  15. S. Banerjee, T. Pedersen: An adapted Lesk algorithm for word sense disambiguation using Word-Net, 2002.
  16. S. Banerjee, T. Pedersen: Extended gloss overlaps as a measure of semantic relatedness, 2003.
  17. M. Mayer: Launching a process and displaying its standard output.

(*) is the co-author of this article.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Authors

Troy Simpson


Member
Troy is employed as an analyst/programmer at an Australian university, and maintains ebswift.com and geekswithlightsabers.com as a hobby. A number of popular coding projects have been produced by ebswift.com, including WordNetCE and the WordNet.Net open source .NET API for WordNet.

See http://www.ebswift.com for Troy's projects.
Occupation: Software Developer
Location: Australia Australia

Thanh Dao


Member
I'm still alive, but I have temporarily moved on to mobile and web work (J2ME/BREW/PHP/Flash... nothing Microsoft). Things have been very busy and probably will continue to be, so I haven't had a chance to maintain the project or respond. I hope to find time to write again, because many ideas involving WPF and Silverlight are waiting. Wish me luck.

FYI: 
- MESHSimPack project(c# library for measuring similarity among concepts of the MESH ontology):
http://sourceforge.net/projects/meshsimpack.
Occupation: Software Developer
Location: Vietnam Vietnam
Posted on 2010-03-19 16:47 by baby-fly. Views: 1683, Comments: 0. Category: Information Retrieval / Data Mining.