At the same time, I have published the source code on GitHub, to make it a little more available to those who want to help out with changes and improvements. I prefer that changes are sent as pull requests, but it is of course fine to email changes directly to me if you do not want your name made public. For those who want to build other applications based on the IP addresses used by Creeper, there is now a simple JSON publication of the addresses. What is needed, for example, is tooling to verify and manage changes to the addresses. But even something that just verifies the IP addresses through a slightly cleverer web interface than the one available now would be very good to have.
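If you want to play with the list, a minimal Python client could look like the sketch below. The URL and the response shape (a flat JSON array of address strings) are assumptions for illustration only; check the project's documentation for the real endpoint and schema.

```python
import json
import urllib.request

# Hypothetical endpoint and schema -- substitute the real published URL
# and adjust parsing to the actual JSON format.
CREEPER_JSON_URL = "https://example.org/creeper/addresses.json"

def fetch_creeper_addresses(url=CREEPER_JSON_URL):
    """Fetch the published JSON list of IP addresses as a Python list."""
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    for address in fetch_creeper_addresses():
        print(address)
```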
To turn a natural language understanding problem into a machine learning problem, the first step is to find a way to represent the symbols mathematically. The most intuitive, and by far the most commonly used, word representation in NLP is the one-hot representation, in which each word is represented as a long vector. The vector's dimension is the size of the vocabulary; most entries are 0, and exactly one dimension has the value 1, identifying the current word. For example, "microphone" might be represented as [0, 0, 0, 1, 0, 0, 0, ...] and "Mike" as [0, 0, 0, 0, 0, 0, 0, 0, 1, ...]: each word is a single 1 in a vast sea of 0s. Stored sparsely, this one-hot representation becomes very simple: assign each word a numeric ID. In the example above, "microphone" is recorded as 3 and "Mike" as 8 (counting from 0). When programming, a hash table that assigns each word its number is all you need. Paired with algorithms such as maximum entropy, SVM, or CRF, this concise representation has handled the mainstream tasks of the NLP field well. It does have one important problem, however, the "vocabulary gap" phenomenon: any two words are completely isolated from each other. The two vectors alone reveal nothing about whether the words are related; even synonyms such as "microphone" and "Mike" cannot survive it. The word vectors generally used in deep learning are not the very long vectors of the one-hot representation, but a distributed representation (I am not sure how this should be translated, because there is a separate concept called "distributional representation", which also denotes a low-dimensional real vector but is a different idea). Such a vector typically looks like [0.792, -0.177, -0.107, 0.109, -0.542, ...], with 50 or 100 dimensions being common. The representation is not unique; the mainstream methods currently used to compute these vectors are covered later. The greatest contribution of the distributed representation is (in my view) that it places related or similar words closer together. The distance between vectors can be measured with the traditional Euclidean distance, or with the cosine of the angle between them. Represented this way, "Mike" and "microphone" will be far closer than "Mike" and "weather". Ideally, "Mike" and "microphone" might be represented identically, but because the same word is also used to write the English name "Mike", it picks up some personal-name semantics and will not coincide exactly with "microphone".

1. The origin of word vectors

The distributed representation was first proposed by Hinton in his 1986 paper "Learning distributed representations of concepts". Although that paper did not say that words should be given distributed representations (I suspect it mainly served to advertise the BP network he had just proposed), the forward-looking idea planted a spark in people's minds, and after the year 2000 it began to receive serious attention. Using a distributed representation for a word is commonly called a "word representation" or "word embedding", known in Chinese as a "word vector".
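To make the contrast concrete, here is a small Python sketch; the vocabulary, IDs, and embedding values are invented for illustration.

```python
import numpy as np

# Toy vocabulary; in practice a hash table maps each word to its numeric ID.
vocab = {"microphone": 3, "Mike": 8, "weather": 5}
vocab_size = 16

def one_hot(word):
    """One-hot representation: a |V|-dimensional vector with a single 1."""
    v = np.zeros(vocab_size)
    v[vocab[word]] = 1.0
    return v

# Distinct one-hot vectors are always orthogonal, so every pair of words
# looks equally unrelated -- the "vocabulary gap".
print(np.dot(one_hot("microphone"), one_hot("Mike")))  # 0.0

# Distributed representation: low-dimensional real vectors. Values here are
# made up; real ones come out of training a model like those described below.
embedding = {
    "microphone": np.array([0.79, -0.18, -0.11, 0.11, -0.54]),
    "Mike":       np.array([0.75, -0.20, -0.05, 0.14, -0.50]),
    "weather":    np.array([-0.42, 0.63, 0.30, -0.25, 0.08]),
}

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Related words sit closer: cosine(microphone, Mike) is near 1,
# while cosine(Mike, weather) is much lower.
print(cosine(embedding["microphone"], embedding["Mike"]))
print(cosine(embedding["Mike"], embedding["weather"]))
```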
Strictly speaking, this can only be called a customary name, not a real translation. Six months ago I wanted to translate the term, but I simply could not think of how "embedding" should be rendered, and later got used to the name -_-||| Good translations are welcome. (Update: @南大周志华 offered a proper translation on Weibo: 词嵌入, "word embedding".) For the meaning of the term "embedding", refer to the corresponding Wikipedia page. In everything that follows, "word vector" means a word represented with a distributed representation.

If the traditional sparse representation is used for certain tasks (such as building a language model), it runs into the curse of dimensionality [Bengio 2003]; low-dimensional word vectors have no such problem. From a practical standpoint, applying deep learning to high-dimensional features has an almost unacceptable complexity, so low-dimensional word vectors are much sought after here as well. And since, as mentioned in the previous section, similar words get similar word vectors, models designed on top of word vectors come with a built-in smoothness, which makes them look very elegant.

2. How word vectors are trained

To introduce how word vectors are trained, one must mention language models. All of the training methods I have studied so far obtain the word vectors while training a language model. That is fairly easy to understand: what can be learned from unannotated natural text amounts to little more than word frequency statistics, word co-occurrence, word collocations, and the like. Building a language model from statistics over natural text is without doubt the task with the most demanding accuracy requirements (which does not rule out someone later finding a better use for such statistics). Since constructing a language model demands so much, it also requires more precise statistics and analysis of the language, better models, and more data to support them. That the best word vectors currently come from this is not hard to understand. The work presented here learns word vectors from unannotated plain text in an unsupervised way (the idea language models were originally built on); one can guess that if annotated corpora were spent on training word vectors, there would certainly be more methods. Given current corpus sizes, however, using unannotated corpora remains the more practical route. The three most classic works on training word vectors are C&W 2008, M&H 2008, and Mikolov 2010. But before discussing them, I have to introduce the classic of this line of work, by Bengio.

2.0 A brief introduction to language models

An inserted aside briefly describing language models; readers who know them can skip this section. A language model, in essence, judges whether a sentence sounds like something a person would say. This is useful: after machine translation or speech recognition produces a number of candidates, a language model can pick out the most plausible result, and it finds use in other NLP tasks as well. Formally, a language model gives the probability $P(w_1, w_2, \ldots, w_t)$ that a string is natural language, where $w_1$ through $w_t$ are the words of the sentence. A simple decomposition is:

$P(w_1, w_2, \ldots, w_t) = P(w_1) \times P(w_2 \mid w_1) \times P(w_3 \mid w_1, w_2) \times \cdots \times P(w_t \mid w_1, w_2, \ldots, w_{t-1})$
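For intuition, here is a minimal bigram language model in Python, sketched under the simplest possible assumptions (raw counts, no smoothing, and an invented toy corpus):

```python
from collections import Counter

# Toy corpus; a real language model is trained on vastly more text.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]

unigrams = Counter(w for sent in corpus for w in sent)
bigrams = Counter(pair for sent in corpus for pair in zip(sent, sent[1:]))

def p_bigram(word, prev):
    """n-gram approximation with n = 2: P(w_t | w_{t-1}) from raw counts."""
    return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0

def p_sentence(sent):
    """Chain rule: P(w_1..w_t) ~ P(w_1) * prod_t P(w_t | w_{t-1})."""
    p = unigrams[sent[0]] / sum(unigrams.values())
    for prev, word in zip(sent, sent[1:]):
        p *= p_bigram(word, prev)
    return p

print(p_sentence(["the", "cat", "sat"]))  # plausible order, score ~0.11
print(p_sentence(["sat", "the", "cat"]))  # implausible order, score 0.0 here
```

With no smoothing, a single unseen bigram zeroes out the whole sentence; avoiding that cliff is exactly the "built-in smoothing" the neural model below gets for free.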
Commonly used language models all approximate $P(w_t \mid w_1, w_2, \ldots, w_{t-1})$ in some way. The n-gram model, for example, approximates it with $P(w_t \mid w_{t-n+1}, \ldots, w_{t-1})$. Incidentally, since notation differs too much from paper to paper, this post tries to use the (slightly simplified) notation of Bengio 2003 throughout, to make comparison and analysis across the methods easier.

2.1 The Bengio classic

At the bottom of the model, $w_{t-n+1}, \ldots, w_{t-2}, w_{t-1}$ are the previous $n-1$ words, and the task is to predict the next word $w_t$ from these known $n-1$ words. $C(w)$ denotes the word vector of word $w$; the whole model shares one set of word vectors, stored as the rows of a matrix $C$ (of size $|V| \times m$), where $|V|$ is the vocabulary size (the total number of distinct words in the corpus) and $m$ is the dimension of the word vectors. Converting $w$ to $C(w)$ is simply taking a row out of the matrix.

The first layer (input layer) of the network concatenates the $n-1$ vectors $C(w_{t-n+1}), \ldots, C(w_{t-2}), C(w_{t-1})$ end to end into an $(n-1)m$-dimensional vector, referred to below as $x$. The second layer (hidden layer) is computed like an ordinary neural network, directly as $d + Hx$, where $d$ is a bias term; $\tanh$ is then applied as the activation function. The third layer (output layer) has $|V|$ nodes; each node $y_i$ is the unnormalized log probability that the next word is word $i$. A softmax activation then normalizes the output $y$ into probabilities. In the end, $y$ is computed as:

$y = b + Wx + U \tanh(d + Hx)$

Here $U$ (a $|V| \times h$ matrix) holds the hidden-to-output parameters, and most of the computation in the whole model is concentrated in the matrix product of $U$ with the hidden layer. The three later works all simplify this part in one way or another to speed up the calculation. The matrix $W$ (of size $|V| \times (n-1)m$) contains the direct edges from the input layer to the output layer: a linear transformation straight from input to output, apparently a common neural network trick (I have not examined it closely). If no direct connections are wanted, $W$ is simply set to 0. In his final experiments Bengio found that the direct connections, while not improving the model, roughly halve the number of iterations; he also suspected that without them the model might produce better word vectors.

With everything in place, the model is optimized by stochastic gradient descent. Note that while the input layer of a neural network is normally just an input value, here the input layer $x$ is itself a parameter (living in $C$) and must be optimized as well. Once the optimization finishes, the word vectors come along with the language model. The language model obtained this way has smoothing built in and needs none of the elaborate smoothing algorithms of traditional n-gram models. Bengio's comparative experiments on the AP News dataset show that his model is 10% to 20% better than an ordinary n-gram with a carefully designed smoothing algorithm.
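A minimal numpy sketch of this forward pass, with randomly initialized parameters and invented sizes; the training loop, which updates $C$, $H$, $d$, $U$, $W$, and $b$ by stochastic gradient descent, is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
V, m, h, n = 1000, 50, 60, 4   # vocab size, embedding dim, hidden dim, n-gram order

# Parameters (random here; training optimizes all of them, including the
# word-vector matrix C itself, since the input layer is a parameter).
C = rng.normal(0, 0.1, (V, m))            # word vectors, one row per word
H = rng.normal(0, 0.1, (h, (n - 1) * m))  # input -> hidden
d = np.zeros(h)                           # hidden bias
U = rng.normal(0, 0.1, (V, h))            # hidden -> output
W = np.zeros((V, (n - 1) * m))            # optional direct input -> output edges
b = np.zeros(V)                           # output bias

def forward(context_ids):
    """P(w_t | w_{t-n+1}..w_{t-1}): the Bengio 2003 forward pass."""
    x = C[context_ids].reshape(-1)          # concatenate n-1 word vectors
    y = b + W @ x + U @ np.tanh(d + H @ x)  # unnormalized log probabilities
    e = np.exp(y - y.max())                 # numerically stable softmax
    return e / e.sum()

probs = forward([17, 42, 256])              # three context word IDs
print(probs.shape, probs.sum())             # (1000,) 1.0
```

The final softmax gives every word a nonzero probability, which is where the built-in smoothing comes from.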
Before ending the presentation of this classic, some gossip about the great Bengio. In the subsequent JMLR version of this work, he introduced an energy function, treating the input vectors and output vectors uniformly and taking minimization of the energy function as the optimization objective; M&H's later work builds on that foundation. He noted that polysemy remained to be resolved; nine years later, Huang proposed a solution. He also remarked casually in the paper (not even under Future Work) that the number of parameters could be reduced in several ways, for example by using a recurrent neural network; Mikolov later published a string of papers along exactly this direction, all the way through his PhD. A master is a master.

2.2 C&W's SENNA

Ronan Collobert and Jason Weston first introduced their method for computing word vectors in "A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning", published at ICML 2008. As with the previous landmark, if you are going to read it now, you should instead read the consolidated version they published in JMLR in 2011, "Natural Language Processing (Almost) from Scratch", which summarizes several pieces of their work very systematically. The title of that JMLR paper is rather imposing, too: doing NLP from scratch. They also open-sourced a system implementing the papers, called SENNA (see the project homepage), more than 3,500 lines of very clearly written pure C code. It was by leaning on this code that I slowly worked through the paper. Unfortunately, the code covers only the testing part; there is no training component.

The real aim of the C&W paper was not to produce good word vectors, nor even to train a language model, but to use word vectors to solve the various tasks within NLP, such as part-of-speech tagging, named entity recognition, chunking, and semantic role labeling. Because of this different aim, C&W's training method is, in my view, the most distinctive. They do not try to approximate $P(w_t \mid w_1, w_2, \ldots, w_{t-1})$, but instead directly try to approximate $P(w_1, w_2, \ldots, w_t)$. In practice, they do not compute the probability of a string at all; rather, they score each window of $n$ consecutive words, $f(w_{t-n+1}, \ldots, w_{t-1}, w_t)$. A higher score $f$ means the sentence is more natural; a low score means it is not very plausible; and if a few words are simply piled together at random, the score will be very low.
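A rough Python sketch of this scoring idea, using the pairwise ranking criterion of the kind C&W train with (corrupt the middle word of a window and require the true window to outscore the corrupted one); the architecture and sizes are heavily simplified:

```python
import numpy as np

rng = np.random.default_rng(1)
V, m, h, n = 1000, 50, 40, 5          # vocab, embedding dim, hidden dim, window size

C = rng.normal(0, 0.1, (V, m))        # word vectors (learned in training)
H = rng.normal(0, 0.1, (h, n * m))    # window -> hidden
w_out = rng.normal(0, 0.1, h)         # hidden -> scalar score

def score(window_ids):
    """f(w_1..w_n): a scalar score for an n-word window, higher = more natural."""
    x = C[window_ids].reshape(-1)
    return w_out @ np.tanh(H @ x)

def ranking_loss(true_window, corrupt_window, margin=1.0):
    """Hinge loss: the true window should outscore the corrupted one by a margin."""
    return max(0.0, margin - score(true_window) + score(corrupt_window))

true_win = [12, 7, 301, 45, 9]
corrupt = list(true_win)
corrupt[n // 2] = rng.integers(V)     # replace the middle word with a random one
print(ranking_loss(true_win, corrupt))
```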
Book Excerpt: Neuro-Linguistic Programming (NLP)

NLP (Neuro-Linguistic Programming) is widely used around the world in personal growth and development, marketing, business management and commercial negotiation, athlete training, and other areas, and has attracted millions of learners. Some translate Neuro-Linguistic Programming as "neurolinguistics", which I think invites misunderstanding, because "neurolinguistics" is usually the translation of Neurolinguistics: the discipline that studies the neural mechanisms of language acquisition, speech production, and speech understanding; how the human brain receives, stores, processes, and retrieves speech information; the neurophysiological mechanisms of normal speech; and the neuropathological mechanisms of speech disorders. Here, we use NLP to mean managing emotions according to certain procedures: self-awareness, self-growth, and self-worth.

Induction means summarizing our experience and drawing conclusions in condensed or explanatory form. Given a stimulus or an incident, we always unconsciously recall a similar experience and, leaning on the impressions and memories of that earlier experience, draw conclusions about the present event, using past experience as the basis for present judgment and reasoning. For example, suppose you need to discuss a cooperation deal with a Taiwanese businessman. When you meet him, the impressions of Taiwanese people you formed earlier, from relationships with people from Taiwan or from the images of them depicted in novels and films, will now color your evaluation of this businessman.

At work we are subject to stimuli from inside and outside, and cannot attend to everything at once. We can only selectively direct ourselves toward, or focus on, certain events and objects, which means ignoring many other stimuli and things. It is because we delete so much information that we are able to notice other information. For example, when you look up at the stars, you may be deaf to nearby music; when you concentrate on thinking, you may not notice a colleague talking next to you. Deletion reduces the psychological burden, but it can sometimes cause us to overlook important information. Deletion happens automatically in our subconscious; we see the information we believe we need.

Two years after graduating from university, two classmates met, one working in Shanghai and one in Beijing. The Beijing classmate worked as a teacher at a prestigious university with a monthly income of 4,000 yuan; the Shanghai classmate worked in business with a monthly income of 20,000 yuan. The Shanghai classmate said sympathetically to the Beijing one: "How can you earn so little money in a month?" The classmate earning 4,000 yuan a month was very angry and felt his pride was hurt. The classmate earning 20,000 yuan a month did not mean to sadden his old schoolmate, yet the result was an old classmate's anger and resentment. The reason is that the classmate earning 20,000 yuan a month engaged with his friend only in terms of money, rather than starting from his friend's interests, intellect, psychological needs, and other qualities.
In addition, NLP's presuppositions include: the map is not the territory, appearances differ from reality; measure the effect of communication by the response it gets; there is no failure, only results; the behavior a person displays is usually the one that gets them the best available effect; only inflexibility in communication works against results; people reveal information through their behavior; having a choice is better than having none, so act to increase choice; the person with the most choices has the most flexible thinking and behavior and will have the greatest influence in any interaction; we always have all the resources we need, or we can create them; we take in all information through our senses; imitating successful performance can lead to excellence, for excellence can be copied; the intention behind all human behavior is positive...
In the algorithm design courses at university, we surely all learned to solve the minimum edit distance with dynamic programming. Formally, the minimum edit distance between two strings is the minimum number of single-character insertions, deletions, and substitutions needed to transform one into the other. Minimum edit distance is usually used as a similarity function in a variety of practical applications, detailed below (in particular, for Chinese natural language processing, the word is generally the basic processing unit).

Spelling correction (Spell Correction): a spell checker compares each word against the dictionary entries (English words often need a normalization step first, such as stemming); if a word does not exist in the dictionary, it is considered an error, and the checker then tries to suggest the N words most likely to have been intended. The usual method is to list the dictionary entries with the smallest edit distance to the original word; a sketch of the distance computation and a naive Top-N suggester follows after this list. Some will surely ask: for a word (of length len) not in the dictionary, isn't computing the minimum edit distance against every dictionary entry far too expensive? Indeed, pruning strategies are generally added. For example: since a spell-check application usually only needs to give the Top-N correct suggestions (N is typically 10), dictionary entries can be compared in order of length len, len-1, len+1, len-2, len-3, ...; the minimum edit distance of a suggested entry can be required not to exceed some threshold; if more than N candidate entries with edit distance 1 have already been found, processing can stop; and common misspellings and their suggestions can be cached to improve performance.

DNA analysis: a major theme of genomics is comparing DNA sequences and trying to find common portions of two sequences. If two DNA sequences share similar common subsequences, they are likely homologous. When aligning two sequences, one must consider not only exact character matches, but also spaces or gaps in one sequence (equivalently, insertions in the other) and mismatches, both of which can indicate mutations. In sequence alignment the goal is to find the optimal alignment (generally, one that maximizes the number of matches and minimizes the number of spaces and mismatches). More formally, one can define a score, adding points for matching characters and subtracting points for spaces and mismatched characters.

Named entity extraction (Named Entity Recognition): concretely, the edit distance between a candidate text string and each entity name in a dictionary can be computed; when the distance is smaller than a given threshold, the text string is taken as a candidate entity name. After obtaining candidate names, heuristic rules or a classifier over the surrounding context can decide whether a candidate really is an entity name.

Entity coreference (Entity Coreference): compute the minimum edit distance between two entity mentions to decide whether they corefer, e.g. "IBM" and "IBM Inc.", or "Stanford President John Hennessy" and "Stanford University President John Hennessy".
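The dynamic-programming solution referred to above, plus a naive Top-N suggester, as a self-contained Python sketch (the tiny dictionary is invented; a real spell checker would add the length-based pruning described earlier):

```python
def min_edit_distance(s, t):
    """Levenshtein distance via dynamic programming.

    d[i][j] = minimum number of insertions, deletions, and substitutions
    needed to turn s[:i] into t[:j].
    """
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                     # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                     # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return d[m][n]

def suggest(word, dictionary, top_n=10, max_dist=3):
    """Naive Top-N suggestions: rank dictionary entries by edit distance."""
    scored = ((min_edit_distance(word, entry), entry) for entry in dictionary)
    return [e for dist, e in sorted(scored) if dist <= max_dist][:top_n]

print(min_edit_distance("kitten", "sitting"))                   # 3
print(suggest("graffe", ["giraffe", "graft", "grail", "coffee"]))
```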
Machine translation (Machine Translation): identifying parallel pages: parallel web pages generally have identical or similar interface structures, so the HTML structures of a parallel pair should be highly similar. First extract the HTML tags of each page into a string, then use minimum edit distance to examine how similar the two strings are. In practice, this strategy is generally combined with other methods, such as document length ratios and sentence-alignment translation models, to identify truly parallel pages. Automatic evaluation: first store the machine-translation source texts together with multiple reference translations of different quality levels; when evaluating an automatic translation, find the reference translation with the smallest edit distance to it, and use that to indirectly estimate the translation's quality.
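A toy illustration of the parallel-page heuristic, reusing the min_edit_distance function from the sketch above (it works on lists as well as strings); the tag sequences are invented:

```python
# Pages are reduced to their tag skeletons, so structurally parallel pages
# yield a small edit distance while unrelated layouts yield a large one.
page_en = ["html", "body", "h1", "p", "p", "ul", "li", "li"]
page_zh = ["html", "body", "h1", "p", "p", "ul", "li", "li", "li"]
page_other = ["html", "body", "table", "tr", "td", "td"]

print(min_edit_distance(page_en, page_zh))     # 1: likely parallel structure
print(min_edit_distance(page_en, page_other))  # larger: probably not parallel
```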