Search Results

Search found 4398 results on 176 pages for 'photo matching'.


  • Compiler is able to find a function even though the matching .h file is not updated?

    - by Maxim Veksler
    Hello Friends, I'm writing a C project for university and stumbled upon a compiler behavior which I don't understand. In this file http://code.google.com/p/openu-bsc-maximveksler/source/browse/trunk/20465/semester/tasks/maman14/alpha/maman14/assembler/phaseOne.c?r=112 I've added a call to a function named freeAsmInstruction(). This function is defined in a file named lineParser.c, yet I haven't updated the matching lineParser.h header file to include this function declaration. Why does this code compile? I would expect gcc to fail to compile phaseOne.c until the correct lineParser.h is updated with the declaration of freeAsmInstruction(). I would appreciate an explanation. Thank you, Maxim

    Read the article

  • How to highlight matching sub-strings inside a ListBox?

    - by Kishore Kumar
    I have one TextBox and one ListBox for searching a collection of data. While searching for text, if the matching string is found anywhere in the list, it should be shown in green and bold. For example, I have a string collection like "Dependency Property, Custom Property, Normal Property". If I type "prop" in the search TextBox, all three occurrences of "prop" (only the word "prop") should be bold and green. Any idea how this can be done? Data inside the ListBox is presented using a DataTemplate.

    Read the article

  • Help with manipulating child/parent elements with jQuery

    - by Mohammad
    Currently I have this jQuery script var img = $(this); img.wrap('<div class="photo"></div>'); img.parent().append("<p>" + img.attr("alt") + "</p>"); which successfully turns the following: <img src="photo.jpg" alt="caption"> into this <div class="photo"> <img src="photos.jpg" alt="caption"/> <p>caption</p> </div> All is well unless the image has a link as a parent (<a href="#"><img ...>). Since I want the result to be correct HTML (I'm fussy), improving the resulting outcome below <a href="#"> <div class="photo"> <img src="photos.jpg" alt="caption"/> <p>caption</p> </div> </a> to this: <div class="photo"> <a href="#"> <img src="photos.jpg" alt="caption"/> </a> <p><a href="#">caption</a></p> </div> would be satisfactory. This is my jQuery script so far (which doesn't work, because I'm a noob, aha): if(img.parent('a')){ var targetLink = img.parent('a').attr('href').val(); img.parent('a').wrap('<div class="photo"></div>'); img.parent('div').append('<p><a href="'+targetLink+'">' + img.attr("alt") + '</a></p>'); }else{ img.wrap('<div class="photo"></div>'); img.parent().append("<p>" + img.attr("alt") + "</p>"); }; Any advice or help will be greatly appreciated :) Thank you! Update - answer (works but looks sloppy; I'm editing it at the moment): if(img.parent('a').length){ var targetLink = img.parent().attr('href'); img.parent().wrap('<div class="photo"></div>'); var divTarget = img.parent('a').parent('div'); divTarget.append('<p><a href="'+targetLink+'">' + img.attr("alt") + '</a></p>'); }else{ img.wrap('<div class="photo"></div>'); img.parent().append("<p>" + img.attr("alt") + "</p>"); };

    Read the article

  • JS / jQuery character problem

    - by Jimmy Farax
    I have some code with a character that JS is not handling properly. $(document).ready(function(){ $.getJSON("http://sjama.tumblr.com/api/read/json?callback=?", function (data){ for (var i=0; i<data.posts.length; i++){ var blog = data.posts[i]; var id = blog.id; var type = blog.type; var photo = blog.photo-url-250; if (type == "photo"){ $("#blog_holder").append('<div class="blog_item_holder"><div class="blog_item_top"><img src='+photo+'/></div><div class="blog_item_bottom">caption will go here</div></div>'); } } }); <!-- end $.getJSON }); The problem is with this line: var photo = blog.photo-url-250; after "blog.", the "url" part is read incorrectly because of the dashes (-) in the property name. What can I do to sort this problem out?

    Read the article

  • Scala 2.8.0.RC2 compiler problem on pattern matching statement?

    - by gruenewa
    Why does the following module not compile on Scala 2.8.RC[1,2]? object Test { import util.matching.Regex._ val pVoid = """\s*void\s*""".r val pVoidPtr = """\s*(const\s+)?void\s*\*\s*""".r val pCharPtr = """\s*(const\s+)GLchar\s*\*\s*""".r val pIntPtr = """\s*(const\s+)?GLint\s*\*\s*""".r val pUintPtr = """\s*(const\s+)?GLuint\s*\*\s*""".r val pFloatPtr = """\s*(const\s+)?GLfloat\s*\*\s*""".r val pDoublePtr = """\s*(const\s+)?GLdouble\s*\*\s*""".r val pShortPtr = """\s*(const\s+)?GLshort\s*\*\s*""".r val pUshortPtr = """\s*(const\s+)?GLushort\s*\*\s*""".r val pInt64Ptr = """\s*(const\s+)?GLint64\s*\*\s*""".r val pUint64Ptr = """\s*(const\s+)?GLuint64\s*\*\s*""".r def mapType(t: String): String = t.trim match { case pVoid() => "Unit" case pVoidPtr() => "ByteBuffer" case pCharPtr() => "CharBuffer" case pIntPtr() | pUintPtr() => "IntBuffer" case pFloatPtr() => "FloatBuffer" case pShortPtr() | pUshortPtr() => "ShortBuffer" case pDoublePtr() => "DoubleBuffer" case pInt64Ptr() | pUint64Ptr() => "LongBuffer" case x => x } }

    Read the article

  • Parse an XML file with the same tag multiple times (iPhone SDK)

    - by neha
    Hi all, In my application the same tag appears multiple times, and I'm using an XML parser. In my class I have one element with the same name as the corresponding element in the XML file. So in the case of: <photo>abc</photo> <photo>def</photo> what I get in the photo element of my class is the second value, i.e. def, because the first one gets overwritten as there is only one photo element in my class. My question is: am I wrong to mirror the XML elements one-to-one in my class? Is there a better method or a better parser? Or am I on the right path and do I have to handle this manually by setting some flags, etc.? Thanks in advance.
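    One common approach (the question is about the iPhone SDK, but the idea is the same in any language): instead of mapping the repeated tag to a single property, append each parsed <photo> value to a collection, so later values no longer overwrite earlier ones. A minimal Python sketch of the idea, using a made-up <gallery> wrapper element purely for illustration:

        import xml.etree.ElementTree as ET

        xml = "<gallery><photo>abc</photo><photo>def</photo></gallery>"
        root = ET.fromstring(xml)

        # Collect every <photo> into a list instead of one attribute,
        # so the second value no longer overwrites the first.
        photos = [p.text for p in root.findall("photo")]
        print(photos)  # ['abc', 'def']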

    Read the article

  • How to store and retrieve a file in mongoengine?

    - by Seiverence
    I am attempting to store and retrieve a file within MongoDB, but am having issues with the retrieval. class Animal(Document): genus = StringField() family = StringField() photo = FileField() def get_file(): marmot = Animal.objects(genus='Marmota').first() photo = marmot.photo.read() content_type = marmot.photo.content_type print marmot.family # Prints out "Sciuridae" print content_type # Gives me an error, as content_type is "None" def save_file(): marmot = Animal( genus='Marmota', family='Sciuridae') marmot_photo = open('marmot.jpg', 'r') marmot.photo = marmot_photo marmot.photo.content_type = 'image/jpeg' marmot.save() When I check MongoDB, it appears the document does save after save_file, but when I call get_file, the content_type is "None". Am I saving and retrieving the file correctly? If not, what's wrong with the code? NOTE: The issue appears to occur only on Windows. When run on Linux, it works fine... very confused.
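    Two things worth checking (assumptions on my part, not something shown in the question): open the file in binary mode, since on Windows open('marmot.jpg', 'r') reads in text mode and mangles binary data such as JPEGs while Linux makes no distinction, and store the file through the FileField's put() method so the content type is recorded together with the file. A minimal sketch along those lines:

        from mongoengine import Document, StringField, FileField, connect

        connect("test")  # hypothetical database name, for illustration only

        class Animal(Document):
            genus = StringField()
            family = StringField()
            photo = FileField()

        def save_file():
            marmot = Animal(genus="Marmota", family="Sciuridae")
            # "rb" matters on Windows: text mode ("r") corrupts binary data there.
            with open("marmot.jpg", "rb") as f:
                marmot.photo.put(f, content_type="image/jpeg")
            marmot.save()

        def get_file():
            marmot = Animal.objects(genus="Marmota").first()
            print(marmot.photo.content_type)  # expected: image/jpeg
            return marmot.photo.read()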

    Read the article

  • How do I select the shallowest matching elements with XPath?

    - by harpo
    Let's say I have this document. <a:Root> <a:A> <title><a:B/></title> <a:C> <item><a:D/></item> </a:C> </a:A> </a:Root> And I have an XmlNode set to the <a:A> element. If I say A.SelectNodes( "//a:*", namespaceManager ) I get B, C, and D. But I don't want D because it's nested in another "a:" element. If I say A.SelectNodes( "//a:*[not(ancestor::a:*)]", namespaceManager ) of course, I get nothing, since both A and its parent are in the "a" namespace. How can I select just B and C, that is, the shallowest children matching the namespace? Thanks. Note, this is XPath 1.0 (.NET 2), so I can't use in-scope-prefixes (which it appears would help).

    Read the article

  • Will this regex do what I expect from it, that is, match against "A1:B10,C3,D4:E1000"?

    - by Will Marcouiller
    I'm currently writing a library where I wish to allow the user to specify spreadsheet cell(s) under four possible alternatives: A single cell: "A1"; Multiple contiguous cells: "A1:B10" Multiple separate cells: "A1,B6,I60,AA2" A mix of 2 and 3: "B2:B12,C13:C18,D4,E11000" Then, to validate whether the input respects these formats, I intended to match it against a regular expression. I have consulted this article on Wikipedia: Regular Expression (Wikipedia) And I also found this related SO question: regex matching alpha character followed by 4 alphanumerics. Based on the information provided in the above-linked articles, I would try this regex: Default Readonly Property Cells(ByVal cellsAddresses As String) As ReadOnlyDictionary(Of String, ICell) Get Dim validAddresses As Regex = New Regex("A-Za-z0-9:,A-Za-z0-9") If (Not validAddresses.IsMatch(cellsAddresses)) Then _ Throw New FormatException("cellsAddresses") ' Proceed with getting the cells from the Interop here... End Get End Property Questions 1. Is my regular expression correct? If not, please help me understand what expression I could use. 2. Which exception is more meaningful here, a FormatException or an InvalidExpressionException? I hesitate because the problem relates to the format in which the property expects the cells to be input, yet I'm using a (regular) expression to match against. Thank you kindly for your help and support! =)
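    For what it's worth, the character class as written ("A-Za-z0-9:,A-Za-z0-9", with no square brackets or anchors) will not do what is intended. One pattern that appears to cover all four alternatives is a comma-separated list of cell-or-range tokens; the code above is VB.NET, but the pattern itself is language-neutral, so here is a quick Python sketch to sanity-check it (the exact quantifiers are my assumption about the intended format):

        import re

        # A cell is letters followed by digits; a range is "cell:cell";
        # the whole input is one or more of those, separated by commas.
        CELL = r"[A-Za-z]+[0-9]+"
        TOKEN = rf"{CELL}(?::{CELL})?"
        PATTERN = re.compile(rf"{TOKEN}(?:,{TOKEN})*")

        for sample in ("A1", "A1:B10", "A1,B6,I60,AA2",
                       "B2:B12,C13:C18,D4,E11000", "A1:", "1A"):
            print(sample, bool(PATTERN.fullmatch(sample)))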

    Read the article

  • PHP - Pull out array keys that have matching values?

    - by MAZUMA
    Is there a way to look inside an array and pull out the entries whose values match one another (in this case, the entries that share the same title)? Questions like this have been asked, but I'm not finding anything conclusive. So if my array looks like this Array ( [0] => Array ( [title] => Title 1 [type] => [message] => ) [1] => Array ( [title] => Title 2 [type] => [message] => ) [2] => Array ( [title] => Title 3 [type] => [message] => ) [3] => Array ( [title] => Title 2 [type] => Limited [message] => 39 ) [4] => Array ( [title] => Title 4 [type] => Offline [message] => 41 ) [5] => Array ( [title] => Title 5 [type] => [message] => ) ) And I want to get this Array ( [1] => Array ( [title] => Title 2 [type] => [message] => ) [3] => Array ( [title] => Title 2 [type] => Limited [message] => 39 ) )
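    The question is about PHP, but the logic being asked for - keep the entries whose title occurs more than once, preserving their original keys - can be sketched briefly; in PHP the counting step would typically lean on something like array_count_values(). A Python sketch of the same idea (data trimmed to the relevant field):

        from collections import Counter

        rows = {
            0: {"title": "Title 1"}, 1: {"title": "Title 2"}, 2: {"title": "Title 3"},
            3: {"title": "Title 2"}, 4: {"title": "Title 4"}, 5: {"title": "Title 5"},
        }

        # Count how often each title occurs, then keep entries whose title repeats.
        counts = Counter(row["title"] for row in rows.values())
        duplicates = {key: row for key, row in rows.items() if counts[row["title"]] > 1}
        print(duplicates)  # {1: {'title': 'Title 2'}, 3: {'title': 'Title 2'}}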

    Read the article

  • SQLAuthority News – Meeting SQL Friends – SQLPASS 2011 Event Log

    - by pinaldave
    One of the biggest reasons I go to SQLPASS is that my friends are going there too. There are so many friends with whom I often talk on Facebook and Twitter, but I rarely get time to meet them and talk with them. One thing I am usually sure of is that many of them will attend SQLPASS. This is one event which every SQL Server enthusiast should attend. Just like everybody, I had a pleasant time meeting many of my SQL friends. There were so many friends I met without taking a photo, and so many friends who took a photo on their own camera which I do not have. Here are 1% of the photos which I do have. If you are not in a photo, it does not mean I have less respect for our friendship. Please post a link to our photo together :) I was very fortunate that I was able to snap a quick photograph with Dr. David DeWitt. I stood outside the hall waiting for him to show up, and when he was heading down from the convention center I asked him if I could have one photo for my memory lane; very politely he agreed. It indeed made my day! Pinal Dave with Dr. David DeWitt Every single time I meet Steve, I make sure I have one photo for my memory. Steve is so kind every single time. If you know SQL and do not know Steve Jones, you do not know SQL (IMHO). Following is the photograph with Michael McLean. More details about this photo in a future blog post! Pinal Dave, Michael McLean, and Rick Morelan Arnie always shares his wisdom with me. I still remember the very first time I visited the USA: I was standing alone in a corner and Arnie walked up to me and introduced me to every single person he knew. Talking to Arnie is always a pleasure and always inspiring. Arnie Rowland and Pinal Dave I am now a published author and have written two books so far. I am fortunate to have Rick Morelan as co-author of both of my books. He is a great guy and very easy to become friends with. I am very much impressed by him and his kindness during the co-authoring of the books. Here is the very first photograph of us together at SQLPASS. Rick Morelan and Pinal Dave Diego Nogare and I have been talking for a long time on Twitter and on various social media channels. I finally got the chance to meet my friend from Brazil. It was an excellent experience to meet a friend whom one has wanted to meet for a long time and never had the chance to before. Buck Woody – who does not know Buck? He is funny, kind and, most importantly, a friend to everyone. Buck is so kind that he does not hesitate to approach people even though he is famous and well known in the community. Every time I meet him I learn something. He is always smiling and approachable. Pinal Dave and Buck Woody Rushabh Mehta is the current SQL PASS president and a personal friend. He always has a smiling face and tremendous love for the SQL community. I often wonder where he gets the time for all the effort he puts in for the community. I never miss a chance to meet and greet him. Even though he is a renowned SQL guru and an extremely busy person, every single time I meet him he asks me, "How are Nupur and Shaivi?" He even remembers my wife's and daughter's names. I am touched. Rushabh Mehta and Pinal Dave Nigel Sammy has an excellent sense of humor and passion for the community. We have excellent synergy while we are together. The attached photo was taken while I was talking to him about SQL on the Seattle shoreline. Pinal Dave and Nigel Sammy Rick Morelan wanted this trip of mine to be memorable. I am vegetarian and I told him that I do not like seafood. Well, to prove the point, he took me to a fantastic seafood restaurant in Seattle and treated me to mouth-watering vegetarian dishes. I think the next time I go to Seattle, I am going to make him take me to the same place again. Rick, Rushabh, Pinal and Paras Well, this is a short summary of a few of the friends I met in Seattle. What is life without friends, eh? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • nginx: How can I set proxy_* directives only for matching URIs?

    - by Artem Russakovskii
    I've been at this for hours and I can't figure out a clean solution. Basically, I have an nginx proxy setup, which works really well, but I'd like to handle a few URLs more manually. Specifically, there are 2-3 locations for which I'd like to set proxy_ignore_headers to Set-Cookie to force nginx to cache them (nginx doesn't cache responses with Set-Cookie as per http://wiki.nginx.org/HttpProxyModule#proxy_ignore_headers). So for these locations, all I'd like to do is set proxy_ignore_headers Set-Cookie; I've tried everything I could think of short of setting up and duplicating every config value, but nothing works. I tried: Nesting location directives, hoping the inner location which matches on my files would just set this value and inherit the rest, but that wasn't the case - it seemed to ignore anything set in the outer location, most notably proxy_pass (and I ended up with a 404). Specifying the proxy_cache_valid directive in an if block that matches on $request_uri, but nginx complains that it's not allowed ("proxy_cache_valid" directive is not allowed here). Specifying a variable equal to "Set-Cookie" in an if block, and then trying to set proxy_cache_valid to that variable later, but nginx doesn't allow variables for this case and throws an error. It should be so simple - modifying/appending a single directive for some requests - and yet I haven't been able to make nginx do that. What am I missing here? Is there at least a way to wrap common directives in a reusable block and have multiple location blocks refer to it, after adding their own unique bits? Thank you. Just for reference, the main location / block is included below, together with my failed proxy_ignore_headers directive for a specific URI. location / { # Setup var defaults set $no_cache ""; # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie if ($request_method !~ ^(GET|HEAD)$) { set $no_cache "1"; } if ($http_user_agent ~* '(iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry|webos|s8000|bada)') { set $mobile_request '1'; set $no_cache "1"; } # feed crawlers, don't want these to get stuck with a cached version, especially if it caches a 302 back to themselves (infinite loop) if ($http_user_agent ~* '(FeedBurner|FeedValidator|MediafedMetrics)') { set $no_cache "1"; } # Drop no cache cookie if need be # (for some reason, add_header fails if included in prior if-block) if ($no_cache = "1") { add_header Set-Cookie "_mcnc=1; Max-Age=17; Path=/"; add_header X-Microcachable "0"; } # Bypass cache if no-cache cookie is set, these are absolutely critical for Wordpress installations that don't use JS comments if ($http_cookie ~* "(_mcnc|comment_author_|wordpress_(?!test_cookie)|wp-postpass_)") { set $no_cache "1"; } if ($request_uri ~* wpsf-(img|js)\.php) { proxy_ignore_headers Set-Cookie; } # Bypass cache if flag is set proxy_no_cache $no_cache; proxy_cache_bypass $no_cache; # under no circumstances should there ever be a retry of a POST request, or any other request for that matter proxy_next_upstream off; proxy_read_timeout 86400s; # Point nginx to the real app/web server proxy_pass http://localhost; # Set cache zone proxy_cache microcache; # Set cache key to include identifying components proxy_cache_key $scheme$host$request_method$request_uri$mobile_request; # Only cache valid HTTP 200 responses for this long proxy_cache_valid 200 15s; #proxy_cache_min_uses 3; # Serve from cache if currently refreshing proxy_cache_use_stale updating timeout; # Send appropriate headers
through proxy_set_header Host $host; # no need for this proxy_set_header X-Real-IP $remote_addr; # no need for this proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # Set files larger than 1M to stream rather than cache proxy_max_temp_file_size 1M; access_log /var/log/nginx/androidpolice-microcache.log custom; }

    Read the article

  • Nginx: Can I cache a URL matching a pattern at a different URL?

    - by Josh French
    I have a site with some URLs that look like this: /prefix/ID, where /prefix is static and ID is unique. Using Nginx as a reverse proxy, I'd like to cache these pages at the /ID portion only, omitting the prefix. Can I configure Nginx so that a request for the original URL is cached at the shortened URL? I tried this (I'm omitting some irrelevant parts) but obviously it's not the correct solution: http { map $request_uri $page_id { default $request_uri; ~^/prefix/(?<id>.+)$ $id; } location / { proxy_cache_key $page_id } }

    Read the article

  • wrestling with some rewrites for Apache...

    - by Evert
    Hi all, I have upgraded my gallery2 to gallery3, and noticed that some links no longer resolve correctly. Since the proper way is to redirect these with a 301, that is the way I'm going. The following series need redirecting: folders: old: http://photo.meulie.net/v/various/Gry/ new: http://photo.meulie.net/various/Gry/ pages: old: http://photo.meulie.net/v/Jacob/02112008310.jpg.html new: http://photo.meulie.net/Jacob/02112008310 (both are of course just examples; there are hundreds of folders & files to redirect...) I think there were/are also direct links to images, but I'm not bothering with those for now. Who can help me out? :-) Regards, Evert

    Read the article

  • USB hub not working on resume from suspend

    - by user1781498
    All the USB ports on my laptop work, but when I resume from suspend some of the USB ports don't work. lsusb before suspend: Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 003: ID 04f3:014b Elan Microelectronics Corp. Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 003 Device 002: ID 04f2:b3a6 Chicony Electronics Co., Ltd Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub lsusb after suspend: Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 003: ID 04f3:014b Elan Microelectronics Corp. Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Read the article

  • RPi and Java Embedded GPIO: Sensor Connections for Java Enabled Interface

    - by hinkmond
    Now we're ready to connect the hardware needed to make a static electricity sensor for the Raspberry Pi and use Java code to access it through a GPIO port. First, very carefully bend the NTE312 (or MPF-102) transistor "gate" pin (see the diagram on the back of the package or refer to the pin diagram on the Web). You can see it in the inset photo on the bottom left corner. I bent the leftmost pin of the NTE312 transistor as I held the flat part toward me. That is going to be your antenna. So, connect one of the jumper wires to the bent pin. I used the dark green jumper wire (looks almost black; coiled at the bottom) in the photo. Then push the other 2 pins of the transistor into your breadboard. Connect one of the pins to Pin # 1 (3.3V) on the GPIO header of your RPi. See the diagram if you need to glance back at it. In the photo, that's the orange jumper wire. And connect the final unconnected transistor pin to Pin # 22 (GPIO25) on the RPi header. That's the blue jumper wire in my photo. For reference, connect the LED anode (long pin on a common anode LED/short pin on a common cathode LED, check your LED pin diagram) to the same breadboard hole that is connecting to Pin # 22 (same row of holes where the blue wire is connected), and connect the other pin of the LED to GROUND (row of holes that connect to the black wire in the photo). Test by blowing up a balloon, rubbing it on your hair (or your co-worker's hair, if you are hair-challenged) to statically charge it, and bringing it near your antenna (green wire in the photo). The LED should light up when it's near and go off when you pull it away. If you need more static charge, find a co-worker with really long hair, or rub the balloon on a piece of silk (which is just as good but not as fun). Next blog post is where we do some Java coding to access this sensor on your RPi. Finally, back to software! Ha! Hinkmond

    Read the article

  • Can I make Apache drop a connection when matching a URL?

    - by PP
    Using mod_rewrite I can construct a rule to respond with a clean error code (e.g. 404 not found, 410 gone, or 403 unauthorised) when a page is requested that I don't want to serve. But frequently I get completely erroneous requests from hackers scanning my website for vulnerabilities or possibly cross-site scripting attempts. For these customers I do not want to return a clean error - I'd rather do something else like immediately drop the connection with no response or, alternatively, hold the connection open for a lengthy period of time to frustrate the automated process. Any ideas how to accomplish this with Apache? I've read that nginx has the ability to immediately terminate a connection when a particular pattern is matched.

    Read the article

  • Is a matching entry in /etc/hosts required for hostname?

    - by JohnWoltman
    I was installing a Tomcat webapp that refused to work until I stumbled on someone else's issue with an unrelated product. The solution was to add the machine's name to /etc/hosts, to match the name returned by hostname. Is this required for general Linux networking to function correctly? My webapp runs in a virtual machine so that I can test it, and I don't normally bother with the /etc/hosts file on VMs; I just shook my fist and cursed Tomcat and the webapp's behavior. I read http://serverfault.com/questions/118823, but that doesn't say whether it's required or not.

    Read the article

  • How to transform a CSV to combine matching rows?

    - by Christian Wolf
    I have a CSV file with some transaction data: let's say date, volume, price and direction (sell/buy). Additionally there is an ID for each transaction, and each closing transaction (the newer one) carries a reference to the corresponding opening transaction. Classical database referencing. Now I want to do some statistics and draw some plots. This could be done via Octave, LaTeX/TikZ, Gnuplot or whatever. To do this I need both the buy and the sell price in one row. My thought was to preprocess the CSV to get another CSV containing the needed information and then do the statistics. In the end I'd like a solution based on scripts rather than a spreadsheet, as the data might change often (exported from an online DB). My current solution (see http://paste.ubuntu.com/6262822/ ) is a bash script that parses the CSV line by line and checks if a corresponding transaction exists. If found, a new row is written to the destination CSV; if not, a warning is printed. The bad news: for each row in the source file I have to read the whole file a few times. This causes long running times of 10 s for 300 lines. As the number of lines might soon rise (10k lines), this is not perfect. I am aware that the many subshells opened in the script might cause the performance problems. Now my questions: Is bash/awk/sed/... a good way to do this? Should I first import all data into a "real" local database to use SQL? Is there an easy way to achieve the desired results?
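    Since the goal is a script-based preprocessing step, one common pattern is to read the file once into a dictionary keyed by transaction ID and then, in a second pass, join every closing row to the opening row it references - linear time instead of re-reading the file for every row. A minimal Python sketch, with made-up column names (id, ref, date, volume, price) since the real layout isn't shown:

        import csv

        with open("transactions.csv", newline="") as src:
            rows = list(csv.DictReader(src))

        by_id = {row["id"]: row for row in rows}  # one pass to index every transaction

        with open("combined.csv", "w", newline="") as dst:
            out = csv.writer(dst)
            out.writerow(["open_date", "close_date", "volume", "open_price", "close_price"])
            for row in rows:
                ref = row["ref"]
                if not ref:                 # opening transactions carry no reference
                    continue
                opener = by_id.get(ref)
                if opener is None:
                    print("warning: no match for transaction", row["id"])
                    continue
                out.writerow([opener["date"], row["date"], row["volume"],
                              opener["price"], row["price"]])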

    Read the article

  • EDQ Technical Enablement for OPN (Prague - June 17-19)

    - by milomir.vojvodic
    Oracle Enterprise Data Quality (EDQ) Technical Enablement and Partner Training Trusted Data for Your Enterprise Applications Oracle Enterprise Data Quality helps organizations achieve maximum value from their business-critical applications by delivering fit-for-purpose data. These products also enable individuals and collaborative teams to quickly and easily identify and resolve any problems in underlying data. With Oracle Enterprise Data Quality, customers can identify new opportunities, improve operational efficiency, and more efficiently comply with industry or governmental regulation. Oracle Enterprise Data Quality is designed to serve as a very channel-friendly platform for OPN. This means that pre-built extensions, components and even complete business solutions can readily be built and shared. This allows our customers/partners to be highly efficient in how they deploy custom business solutions, but also allows our partners to develop specialized components, domain knowledge and even complete business solutions. Training is suitable for: · Database administrators · Architects · Technical staff Objectives of the training: After completing this course, participants should: · Have an understanding of the core functionality of EDQ across profiling, auditing, transforming, parsing and matching data · Be able to describe some of the key capabilities and benefits delivered by EDQ · Be able to create and run standalone EDQ processes and jobs · Be ready to start working with data from customers and (with practice) be able to demonstrate EDQ to customers Agenda 17th June Fundamentals For Demoing (Profile, Audit, Transform and More) Profiling Auditing Transforming Writing and exporting data Jobs and scheduling Publishing, packaging and copying EDQ processes Introduction to the Customer Data Extension Pack Realtime Processing via Web Services The Server Console Run Profiles Data Interfaces Sampling Publishing metrics to the Dashboard Users and security 18th June Matching Matching overview Basic matching configuration Matching rule hierarchies Clustering Merging Reviewing possible matches Outputting Match Data Case study 19th June Address Verification Address Verification Overview Configuration Accuracy Flags Parsing Parsing Overview Phrase profiling Tailoring a CDEP Parser Base Tokenization Classification Reclassification Selection Resolution Register Here Don’t miss this FREE event. Space is limited. Oracle University V Parku 2294/4 148 00 Praha 4 17.6.–19.6.2014, 09:00–17:30

    Read the article

  • Why is this MySQL FULLTEXT query returning 0 rows when matching rows are present?

    - by Don MacAskill
    I have a MySQL table with 200M rows which has a FULLTEXT index on two columns (Title,Body). When I do a simple FULLTEXT query in the default NATURAL LANGUAGE mode for some popular results (they'd return 2M+ rows), I'm getting zero rows back: SELECT COUNT(*) FROM itemsearch WHERE MATCH (Title, Body) AGAINST ('fubar'); But when I do a FULLTEXT query in BOOLEAN mode, I can see the rows in question do exist (I get 2M+ back, depending): SELECT COUNT(*) FROM itemsearch WHERE MATCH (Title, Body) AGAINST ('+fubar' IN BOOLEAN MODE); I have some queries which return ~500K rows which are working fine in either mode, so if it's result size related, it seems to crop up somewhere between 500K and a little north of 2M. I've tried playing with the various buffer size variables, to no avail. It's clearly not the 50% threshold, since we're not getting 100M rows back for any result. Any ideas?

    Read the article

  • How to quickly start programs like "regedit.exe" from the Windows 7 search bar using substring matching?

    - by Palmin
    The search bar in Windows 7 makes it very convenient to start applications quickly: press the Win key and then enter the name of the application. For applications with a Programs menu entry like Firefox, it is sufficient to type Fire and Firefox will be displayed in the Programs section of the search results. For other applications like regedit.exe, I have to type the full command regedit before the correct choice regedit.exe appears. Is there any way to have regedit.exe appear when I have entered just a substring? Please note: I have seen "Add my applications to Vista's Start Search", but I don't want to add anything to the Start Menu manually; this question is about whether there is some configuration that can be tuned to make the results appear. I have also seen "Search behavior of Windows 7 start menu", but my problem is not that the exe appears under Files; regedit.exe correctly appears under Programs, but it should already appear for a substring match.

    Read the article

  • What filesystem comes closest to matching NTFS for support of ACLs, and highly-granular permissioning?

    - by warren
    It seems that most other filesystems handle the basic *nix permissions (ugo±rwx), with maybe an addition here or there, or can be "made" to handle ACLs through the use of other tools on top of the system. On the Wikipedia pages about filesystems (http://en.wikipedia.org/wiki/List%5Fof%5Ffile%5Fsystems & http://en.wikipedia.org/wiki/Comparison%5Fof%5Ffile%5Fsystems), it appears that while some do support extended metadata, none natively support the level of permissioning that NTFS does. Am I wrong in this understanding?

    Read the article
