Search Results

Search found 600 results on 24 pages for 'variations'.

Page 2/24 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • FORMSOF Thesaurus in SQL Server

    - by Coolcoder
    Has anyone done any performance measurements with this in terms of speed where there is a high number of substitutes for any given word? For instance, I want to use this to store common misspellings, expecting to have 4-10 variations of a word: <expansion> <sub>administration</sub> <sub>administraton</sub> <sub>aministraton</sub> </expansion> When you run a fulltext search, how does performance degrade with that number of variations? For instance, I assume it has to do a separate fulltext search for each variation, performing an OR? Also, having say 20-30K entries in the thesaurus XML file - does this impact performance?

    Read the article

  • SQL timetable for employee

    - by latinunit-net
    Hi guys, I need to create an employee shift database. So far I have 3 tables: employee, employee_shift, and shift. I'm supposed to calculate how many shifts an employee has done at the end of the month. My question is: because some months have 30 days, some 28, and some 31, does this mean I need to create 31 different variations in the shift table, one for each day of the month, in order to calculate which employee has worked the most? My business rules say an employee has either 1 or 2 shifts per day, so do I have to have 60 different rows of variations? Am I right, or is there an easier way to work it out?
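    A minimal sketch of the counting idea, assuming (purely for illustration) that each shift actually worked is stored as one record and the records are counted at month end; the record layout and names below are invented, not taken from the schema above:

        # Count shifts per employee from one row per shift actually worked,
        # instead of pre-building 31 (or 60) "day" rows in the shift table.
        from collections import Counter

        # (employee_id, date, shift_name) -- hypothetical sample data
        worked_shifts = [
            (1, "2010-05-01", "morning"),
            (1, "2010-05-01", "evening"),
            (2, "2010-05-01", "morning"),
            (1, "2010-05-02", "morning"),
        ]

        shifts_per_employee = Counter(emp for emp, _, _ in worked_shifts)
        print(shifts_per_employee)                  # Counter({1: 3, 2: 1})
        print(shifts_per_employee.most_common(1))   # [(1, 3)] -> employee 1 worked the most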

    Read the article

  • XML Schema Header & Namespace Config

    - by zharvey
    Migrating from DTD to XSD and for some reason the transition is a bumpy one. I understand how to define the schema once I'm inside the <xs:schema> root tag, but getting past the header & namespace declaration stuff is proving to be especially confusing for me. I have been trying to follow the well-laid-out tutorial on W3S but even that tutorial seems to assume a lot of knowledge up front. I guess what I'm looking for is a King's English explanation of which attributes do what, where they go, and why: xmlns, xmlns:xs, xmlns:xsi, targetNamespace, xsi:schemaLocation. And in some cases I see different variations of these elements/attributes, such as xsi, which seems to have two different notations like xsi:schemaLocation="..." and xs:import schemaLocation="...". I guess between all these slight variations I can't seem to make heads or tails of what each of these does. Thanks in advance for bringing any clarity to this confusion!

    Read the article

  • calculate next line numbers

    - by osomanden
    Weird question I guess, but I am not much of a math wiz, so here goes. I am trying to create a pattern (or variable patterns based on selection) based on x and y numbers (2 rows and 4 columns) and the direction of the counting of x numbers, like:
    1-2-3-4
    5-6-7-8
    That one is easy: when the number of x-columns is reached, go to the next line and continue the x count. But what about, e.g., this one (still 2 rows and 4 columns):
    1-2-3-4
    8-7-6-5
    And what if it is, e.g., 3 or more rows and still 4 columns?
    1-2-3-4
    8-7-6-5
    9-10-11-12
    What would be the formula for this, or for other possible variations (a teaser for variations):
    9-10-11-12
    8-7-6-5
    1-2-3-4
    or reversed?
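    For the "snake" counting shown above, one possible formula as a rough sketch; the 0-based row/column indexing is an assumption about how the grid is addressed:

        def snake_number(row, col, cols):
            """Number at (row, col) when even rows count left-to-right
            and odd rows count right-to-left (0-based indices)."""
            if row % 2 == 0:
                return row * cols + col + 1
            return row * cols + (cols - col)

        # 3 rows x 4 columns reproduces 1-2-3-4 / 8-7-6-5 / 9-10-11-12
        for r in range(3):
            print("-".join(str(snake_number(r, c, 4)) for c in range(4)))

    For the bottom-up variation, the same formula can be applied after remapping the row index (e.g. row' = total_rows - 1 - row).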

    Read the article

  • Android/ iOS how to determine small changes in distance using sensors?

    - by Tom
    I have been doing a bit of research, but I cannot seem to find a way to determine small distances (centimeters and meters) using the sensors in Android or iOS devices. Bluetooth appears too inaccurate and requires more than one device, GPS only works over larger variations in distance, and small variations in rotation seem to make using the accelerometer nearly impossible. Is there a method that I am unaware of that would allow me to do such a thing? I am familiar with calculus, so using integrals to determine distance based on changes in time and velocity/acceleration is not a problem for me; I just do not know how to determine those things. Thank you.
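    For reference, the kind of integration described above, as a naive dead-reckoning sketch that double-integrates sampled acceleration over time; in practice sensor noise and the rotation problem mentioned above make the error grow quickly, and the sample values here are made up:

        # Naive dead reckoning: displacement from sampled acceleration (1-D).
        # accel_samples: acceleration in m/s^2 at a fixed sampling interval dt.
        accel_samples = [0.0, 0.2, 0.4, 0.1, -0.3, -0.4, 0.0]  # invented values
        dt = 0.02  # 50 Hz

        velocity = 0.0
        position = 0.0
        for a in accel_samples:
            velocity += a * dt          # integrate acceleration -> velocity
            position += velocity * dt   # integrate velocity -> position

        print(f"estimated displacement: {position:.6f} m")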

    Read the article

  • SQL University: Parallelism Week - Part 2, Query Processing

    - by Adam Machanic
    Welcome back for the second part of Parallelism Week here at SQL University. Get your pencils ready, and make sure to raise your hand if you have a question. Last time we covered the necessary background material to help you understand how the SQL Server Operating System schedules its many active threads, and the differences between its behavior and that of the Windows operating system's scheduler. We also discussed some of the variations on the theme of parallel processing. Today we'll take a look...(read more)

    Read the article

  • GLSL vertex shaders with movements vs vertex off the screen

    - by user827992
    If I have a vertex shader that manages some movements and variations of the positions of some vertices in my OpenGL context, is OpenGL smart enough to run this shader only on the vertices visible on the screen? This part of the OpenGL programmable pipeline is not clear to me because the sources are not really clear about it; they talk about fragments and pixels, and I get that, but what about vertex shaders? If you need a reference, I'm reading from this right now and this online book has a couple of examples about this.

    Read the article

  • Weird Results A/B Test in Google Website Optimizer

    - by Yisroel
    I set up a test in Google Website Optimizer that has 3 variations: original (A), B, and C. In order to further validate the results of the test, I added a variation C that is exactly the same as the original. And that's where the results get weird. Six days into the test, the best-performing variation is C. It outperforms the original by 18.4%! How is that possible? Do I now discount the results of this test entirely?
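    For context on how an identical variation can look 18.4% better, a back-of-the-envelope two-proportion z-test sketch; the visitor and conversion counts below are invented, since the post does not give them:

        # Can an ~18% relative lift between two identical variations be noise?
        # Counts are hypothetical, chosen only to produce an 18.4% lift.
        from math import sqrt, erf

        conv_a, n_a = 38, 1000   # original (A): 3.8% conversion
        conv_c, n_c = 45, 1000   # identical copy (C): 4.5% -> ~18.4% relative lift

        p_a, p_c = conv_a / n_a, conv_c / n_c
        p_pool = (conv_a + conv_c) / (n_a + n_c)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_c))
        z = (p_c - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided

        print(f"z = {z:.2f}, p-value = {p_value:.2f}")  # roughly p = 0.43 here: not significant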

    Read the article

  • Managing Data Growth in SQL Server

    'Help, my database ate my disk drives!'. Many DBAs spend most of their time dealing with variations of the problem of database processes consuming too much disk space. This happens because of errors such as incorrect configurations for recovery models, data growth for large objects and queries that overtax TempDB resources. Rodney describes, with some feeling, the errors that can lead to this sort of crisis for the working DBA, and their solution.

    Read the article

  • I am having an error with installing everything!

    - by Justin Baskaran
    Every time I go to install wine or playonlinux I get this error (screenshot not shown). Then I get other variations too, but this altogether... People on other posts say I have to do a clean install; is this right? I don't want to, but if I have to I will... The error reads: "Package dependencies cannot be resolved. This error could be caused by required additional software packages... details: the following packages have unmet dependencies: playonlinux"

    Read the article

  • Is there any copyleft (GPL-like) license with both the Affero and Lesser modifications?

    - by Ben Voigt
    Looking for a license that covers public network service, like AGPLv3, but like LGPL isn't infectious. Basically I wrote some useful helper functions I want to allow to be used in any work, including closed-source software, but I want to require improvements to MY CODE to be released back to me and the general public. Can you recommend a suitable license? It should also include some of the other AGPL-permitted restrictions (attribution, indemnity), either in the license text or as permitted variations.

    Read the article

  • Some Techniques For SEO

    Search Engine Optimization has now become essential for online business. The many variations of SEO help decide whether a website is a hit or a flop. But the main difference lies in the choice of keywords, phrases and meta-keywords.

    Read the article

  • Increase Your Profits With SEO

    The latest buzz about SEO is how it relates to CRO or Conversion Rate Optimization. Activity centers around the introduction and measurement of webpage variations to find out which styles, formats and content pieces generate the most desirable visitor behavior.

    Read the article

  • Fastest way to check if two square 2D arrays are rotationally and reflectively distinct

    - by kustrle
    The best idea I have so far is to rotate the first array by {0, 90, 180, 270} degrees and reflect it horizontally or/and vertically. We basically get 16 variations [1] of the first array and compare them with the second array; if none of them matches, the two arrays are rotationally and reflectively distinct. I am wondering if there is a more optimal solution than this brute-force approach? [1] 0deg, no reflection; 0deg, reflect over x; 0deg, reflect over y; 0deg, reflect over x and y; 90deg, no reflection; ...
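    As a point of reference, a brute-force sketch of the comparison; for a square grid only 8 of those 16 combinations are actually distinct (the symmetries of the square), so it is enough to check the 4 rotations of the array and the 4 rotations of its mirror:

        def rotate(grid):
            """Rotate a square 2D list 90 degrees clockwise."""
            return [list(row) for row in zip(*grid[::-1])]

        def symmetries(grid):
            """Yield the 8 rotations/reflections of a square grid."""
            for g in (grid, [row[::-1] for row in grid]):  # original and its mirror
                for _ in range(4):
                    yield g
                    g = rotate(g)

        def equivalent(a, b):
            """True if b equals some rotation/reflection of a."""
            return any(b == s for s in symmetries(a))

        a = [[1, 2], [3, 4]]
        b = [[3, 1], [4, 2]]     # a rotated 90 degrees clockwise
        print(equivalent(a, b))  # True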

    Read the article

  • 10 Reasons Why Java is the Top Embedded Platform

    - by Roger Brinkley
    With the release of Oracle ME Embedded 3.2 and Oracle Java Embedded Suite, Java is now ready to fully move into the embedded developer space, what many have called the "Internet of Things". Here are 10 reasons why Java is the top embedded platform.

    1. Decouples software development from the hardware development cycle. In a traditional design flow, development is typically split between hardware and software. This leads to complicated co-design and requires prototype hardware to be built. This parallel and interdependent hardware/software design process typically leads to two or more re-development phases. With Embedded Java, all specific work is carried out in software, with the (processor) hardware implementation fully decoupled. This eliminates, or at least reduces, the need for re-spins of software or hardware, and the original development efforts can be carried forward directly into product development and validation.

    2. Development and testing can be done (mostly) using standard desktop systems through emulation. Because the software and hardware are decoupled, it now becomes easier to test the software long before it reaches the hardware, through hardware emulation. Emulation is the ability of a program in an electronic device to imitate another program or device. In the past, Java tools like the Java ME SDK and the SunSPOT Solarium provided developers with emulation for a complete set of mobile telephones and SunSPOTs. This often included network interaction or, in the case of SunSPOTs, radio communication. What emulation does is speed up the development cycle by refining the software development process without the need for hardware. The software is fixed, redefined, and refactored without the time and expense of hardware testing. With tools like the Java ME 3.2 SDK, Embedded Java applications can be quickly developed on Windows-based platforms. In the end, of course, developers should do a full set of testing on the hardware, as incompatibilities between emulators and hardware will exist, but the amount of time to do this should be significantly reduced.

    3. Highly productive language, APIs, runtime, and tools mean quick time to market. Charles Nutter probably said it best when he tweeted, "Every time I see a piece of C code I need to port, my heart dies a little. Then I port it to 1/4 as much Java, and feel better." The Java environment is a very complex combination of the Java Virtual Machine, the Java language, and its robust APIs. Combine that with the Java ME SDK for small devices, or just NetBeans for the larger devices, and you have a development environment where development time is reduced significantly, meaning the product can be shipped sooner. Of course this is assuming that the engineers don't get slap-happy adding new features given the extra time they'll have.

    4. Create high-performance, portable, secure, robust, cross-platform applications easily. The latest JIT compilers for the Oracle JVM approach the speed of C/C++ code and, in some memory-allocation-intensive circumstances, exceed it. And specifically for embedded devices, both ME Embedded and SE Embedded have been optimized for smaller footprints. For portability, Java uses bytecode to make the language platform independent. This creates a write-once-run-anywhere environment that allows you to develop on one platform and execute on others, and avoids platform vendor lock-in. For security, Java achieves protection by confining a Java program to a Java execution environment and not allowing it to access other parts of the computer. For robustness, the program must execute reliably in a variety of systems. Finally, Oracle Java ME Embedded is a cross-industry and cross-platform product optimized in release version 3.2 for chipsets based on the ARM architectures. Similarly, Oracle Java SE Embedded works on a variety of ARM V5, V6, and V7, X86 and Power Architecture Linux platforms.

    5. Java isolates your apps from language and platform variations (e.g. C/C++, kernel, libc differences). This has been a key factor in Java from day one. Developers write to Java and don't have to worry about underlying differences in the platform variations; those platform variations are managed by the JVM. Gone are the C/C++ problems like memory corruption, stack overflows, and other such bugs which are extremely difficult to isolate. Of course this doesn't imply that you will be able to get away from native code completely; there could be some situations where you have to write native code in either assembler or C/C++, but those instances should be limited.

    6. Most popular embedded processors supported, allowing design flexibility. Java SE Embedded is now available on ARM V5, V6, and V7, along with Linux on X86 and Power Architecture platforms. Java ME Embedded is available on systems based on ARM-architecture SOCs with low memory footprints, and a device emulation environment for x86/Windows desktop computers is integrated with the Java ME SDK 3.2. A standard binary of Oracle Java ME Embedded 3.2 for ARM KEIL development boards based on ARM Cortex M-3/4 (KEIL MCBSTM32F200 using the ST Micro SOC STM32F207IG) will soon be available for download from the Oracle Technology Network (OTN).

    7. Support for key embedded features (low footprint, power management, low latency, etc.). All embedded devices, by their very nature, are constrained in some way. Economics may dictate a device with less RAM and ROM; CPU needs can dictate a less powerful device. Power consumption is another major constraint in some embedded devices, as connecting to a consistent power source is not always desirable or possible, while others have to be constantly on. Often many of these systems are headless (in the embedded space it's almost always Halloween). For memory resources, Java ME Embedded can run in environments as low as 130KB RAM/350KB ROM for a minimal, customized configuration, up to 700KB RAM/1500KB ROM for the full, standard configuration. Java SE Embedded is designed for environments starting at 32MB RAM/39MB ROM. Key functionality of embedded devices such as auto-start and recovery and flexible networking is fully supported. And while Java SE Embedded has been optimized for mid-range to high-end embedded systems, Java ME Embedded is a Java runtime stack optimized for small embedded systems. It provides a robust and flexible application platform with dedicated embedded functionality for always-on, headless (no graphics/UI), and connected devices.

    8. Leverage the huge Java developer ecosystem (expertise, existing code). There are over 9 million developers in the world who work on Java, and while not all of them work on embedded systems, their wealth of expertise in developing applications is immense. In short, getting a Java developer to work on an embedded system is pretty easy; you probably have a Java developer living in your subdivision. Then of course there is the wealth of existing code. The Java Embedded Community on Java.net is the central gathering place for embedded Java developers, and conferences like Embedded Java @ JavaOne and a variety of hardware-vendor conferences like the Freescale Technology Forums offer an excellent opportunity for those interested in embedded systems.

    9. Easily create end-to-end solutions integrated with Java back-end services. In the "Internet of Things", things aren't on an island doing a single task. For instance, an embedded drink dispenser doesn't just dispense a beverage, but could collect money from a credit card and also send information about current sales. Similarly, an embedded house power-monitoring system doesn't just manage the power usage in a house, but can also send that data back to the power company. In both cases it isn't about the individual thing, but about monitoring a collection of things. How much power did your block, subdivision, area of town, town, county, state, nation, world use? How many Dr Peppers were purchased from thing1, thing2, thingN? The point is that all this information can be collected and transferred securely (and believe me, that is a key issue that Java fully supports) to back-end services for further analysis. And what better back-end service exists than a Java back-end service? It's interesting to note that on larger embedded platforms that support the Java Embedded Suite, some of the analysis might be done on the embedded device itself, as JES has a GlassFish server and Java DB as part of the installation. The result is an end-to-end Java solution.

    10. Solutions from constrained devices to server-class systems. Just take a look at some of the embedded Java systems that have already been developed and you'll see a vast range of solutions. The Livescribe pen, the Kindle, each and every Blu-ray player, Cisco's Advanced VoIP phone, the Kronos InTouch smart time clock, EnergyICT smart metering, EDF's automated meter management, Ricoh printers, and Stanford's automated car are just a few from a list of embedded Java implementations that continues to grow.

    Conclusion: Now if you're a Java developer you probably look at some of these 10 reasons and say "duh", but for embedded developers this should be an eye-opening list. And with the release of ME Embedded 3.2 and the Java Embedded Suite, the embedded developer's life is now a whole lot easier. For the Java developer, your employment opportunities are about to increase. For both, it's a great time to start developing Java for the "Internet of Things".

    Read the article

  • Reduce ERP Consolidation Risks with Oracle Master Data Management

    - by Dain C. Hansen
    Reducing the risk of ERP consolidation starts first and foremost with your data. This is nothing new; companies with multiple misaligned ERP systems are often putting inordinate risk on their business. It can translate to too much inventory, long lead times, and shipping issues from poorly organized and specified goods. And don't forget the finance side! When goods are shipped and promises are kept or not kept, there's the issue of accounts. No single chart of accounts translates to no accountability. So, I've decided: I need to consolidate! Well, you can't consolidate ERP applications [for that matter, any of your applications] without first considering your data. This means looking at how your data is being integrated by these ERP systems, how it is being synchronized, and what information is being shared or not being shared. Most importantly, it means making sure that the data is mastered. What is the best way to do this? In the recent webcast Reduce ERP Consolidation Risks with Oracle Master Data Management we outlined 3 key guidelines: #1: Consolidate your Product Data; #2: Consolidate your Customer and Supplier (Party) Data; #3: Consolidate your Financial Data. Together these help customers achieve reduced risk, better customer intimacy, reduced inventory levels, elimination of product variations, and finally a single master chart of accounts. In the case of Oracle's customer Zebra Technologies, they were able to consolidate over 140 applications by mastering their data. Ultimately this gave them 60% cost savings for the year on IT spend.

    Oracle's Solution for ERP Consolidation: Master Data Management. Oracle's enterprise master data management (MDM) can play a big role in ERP consolidation. It includes a set of products that consolidates and maintains complete, accurate, and authoritative master data across the enterprise and distributes this master information to all operational and analytical applications as a shared service. It's optimized to work with any application source (not just Oracle's) and can integrate using technology from Oracle Fusion Middleware (i.e. GoldenGate for data synchronization and real-time replication, or ODI with its E-LT optimized bulk data and transformation capability). In addition, especially for ERP consolidation use cases, it's important to leverage the AIA and SOA capabilities in Fusion Middleware to connect these multiple applications together and relay the data into the correct hub. Oracle's MDM strategy is a unique offering in the industry, one that has common elements across the top and bottom, in Middleware, BI/DW, and Engineered Systems, combined with Enterprise Data Quality to enable comprehensive data governance at all levels. In addition, Oracle MDM provides best-in-class capabilities to master all variations of data, including customer, supplier, product, and financial data.

    But ultimately at the center of Oracle MDM is your data: making it more trusted, making it secure and accessible as part of a role-based approach, and getting it to make sense to you in any situation, whether it's a specific ERP process like we talked about or something that is custom to your organization. To learn more about these techniques in ERP consolidation, watch our webcast or go to our Oracle MDM website at www.oracle.com/goto/mdm.

    Read the article

  • How to be Agile when new work keeps affecting completed work?

    - by jdln
    The project I'm working on is to re-skin an existing website. The functionality will stay the same; it's just the styles that are changing. The HTML is not changing, I'm only modifying the CSS files. The site is pretty complex. There are dozens of pages. Users can be logged in and have a number of different roles. Depending on their role, the content of the page and what pages they are allowed to see varies. We're using Git and GitHub. I'm trying to write CSS that works as components, so when the same form elements, headings, etc. appear on multiple pages they are already styled and are consistent. Most of the time this is working well. Sadly the format and class names in the HTML are at times messy and unpredictable. When I fix something on one page it can break another. The job is also harder as no one knows exactly all the variations that are possible due to the user roles. As such I'm continuously finding new variations as I go along.

    I'm making headway by putting a lot of comments in my CSS. If I need to remove a CSS rule I'll comment it out so I can still see it with the Chrome dev tools, and I'll put a comment in the CSS saying why I removed it and for what page this was done. This means that if on another page I'm about to add the rule to fix a different problem, there is more of a chance I will see how this would break the first page. This allows me to either find a different solution that will work for both pages, or make the override page-specific. This has been working quite well for me.

    If I had complete free rein and the only deadline was to finish the project by the end, then this method would be fine. However, my manager is trying to mitigate risk by breaking the work into areas to be completed per sprint. This is counter to how I have been approaching things, as something like my typography styles will affect all other pages on the site. The other issue is that the different stakeholders want to sign off each section as I go along. However, once I've finished a section it may change if I change CSS that affects it and also affects a new section I'm working on. I've asked that the stakeholders do a quick unofficial sign-off in stages (e.g. per sprint), and the final official sign-off at the end of the project, but this is being met with resistance. I do understand why it would be higher risk to do this, but the only way to guarantee that a signed-off section will not change is to make ALL future changes page-specific.

    In addition to this I'm being told that all work that I push to the Git repo should be ready to go live, and as such should not contain any code comments. This is risky for me as I won't know until I've finished the site whether I will ever benefit from these comments or not. Has anyone else been in a similar situation and managed to find a compromise that worked for this kind of development approach and also the desires of management and stakeholders to have a more Agile approach? A more Agile workflow works great when you can break the work into components and know that once something is done it won't be affected by future work. However, the nature of this project makes this hard to achieve.

    Read the article

  • How to encrypt Amazon CloudFront signature for private content access using canned policy

    - by Chet
    Has anyone using .NET actually worked out how to successfully sign a signature to use with CloudFront private content? After a couple of days of attempts all I can get is Access Denied. I have been working with variations of the following code, and have also tried using OpenSSL.NET and the AWSSDK, but that does not have a sign method for RSA-SHA1 yet. The policy (data) looks like this:

        {"Statement":[{"Resource":"http://xxxx.cloudfront.net/xxxx.jpg","Condition":{"DateLessThan":{"AWS:EpochTime":1266922799}}}]}

    This method attempts to sign the policy for use in the canned URL. Some of the variations have included changing the padding used in the hash and also reversing the byte[] before signing, as apparently OpenSSL does it this way.

        public string Sign(string data)
        {
            using (SHA1Managed SHA1 = new SHA1Managed())
            {
                RSACryptoServiceProvider provider = new RSACryptoServiceProvider();
                RSACryptoServiceProvider.UseMachineKeyStore = false;
                // Amazon PEM converted to XML using OpenSslKey
                provider.FromXmlString("<RSAKeyValue><Modulus>.....");
                byte[] plainbytes = System.Text.Encoding.UTF8.GetBytes(data);
                byte[] hash = SHA1.ComputeHash(plainbytes);
                //Array.Reverse(hash); // I have seen some examples that reverse the hash
                byte[] sig = provider.SignHash(hash, "SHA1");
                return Convert.ToBase64String(sig);
            }
        }

    It's useful to note that I have verified the content is set up correctly in S3 and CloudFront by generating a CloudFront canned-policy URL using my CloudBerry Explorer. How do they do it? Any ideas would be much appreciated. Thanks.

    Read the article

  • Data Driven MSTest: DataRow is always null

    - by David Back
    I am having a problem using Visual Studio data-driven testing. I have tried to deconstruct this to the simplest example. I am using Visual Studio 2012. I create a new unit test project and reference System.Data. My code looks like this:

        namespace UnitTestProject1
        {
            [TestClass]
            public class UnitTest1
            {
                [DeploymentItem(@"OrderService.csv")]
                [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                    "OrderService.csv", "OrderService#csv", DataAccessMethod.Sequential)]
                [TestMethod]
                public void TestMethod1()
                {
                    try
                    {
                        Debug.WriteLine(TestContext.DataRow["ID"]);
                    }
                    catch (Exception ex)
                    {
                        Assert.Fail();
                    }
                }

                public TestContext TestContext { get; set; }
            }
        }

    I have a very small CSV file whose Build Options are set to 'Content' and 'Copy Always'. I have added a .testsettings file to the solution, set enable deployment, and added the CSV file. I have tried this with and without |DataDirectory|, and with/without a full path specified (the same path that I get with Environment.CurrentDirectory). I've tried variations of "../" and "../../" just in case. Right now the CSV is at the project root level, same as the .cs test code file. I have tried variations with XML as well as CSV. TestContext is not null, but DataRow always is. I have not gotten this to work despite a lot of fiddling with it. I'm not sure what I'm doing wrong. Does MSTest create a log anywhere that would tell me if it is failing to find the CSV file, or what specific error might be causing DataRow to fail to populate? I have tried the following CSV files:

        ID
        1
        2
        3
        4

    and

        ID, Whatever
        1,0
        2,1
        3,2
        4,3

    So far, no dice.

    Read the article

  • Using YQL multi-query & XPath to parse HTML, how to escape nested quotes?

    - by Tivac
    The title is more complicated than it has to be; here's the problem query:

        SELECT * FROM query.multi WHERE queries="
          SELECT * FROM html WHERE url='http://www.stumbleupon.com/url/http://www.guildwars2.com' AND xpath='//li[@class=\"listLi\"]/div[@class=\"views\"]/a/span';
          SELECT * FROM xml WHERE url='http://services.digg.com/1.0/endpoint?method=story.getAll&link=http://www.guildwars2.com';
          SELECT * FROM json WHERE url='http://api.tweetmeme.com/url_info.json?url=http://www.guildwars2.com';
          SELECT * FROM xml WHERE url='http://api.facebook.com/restserver.php?method=links.getStats&urls=http://www.guildwars2.com';
          SELECT * FROM json WHERE url='http://www.reddit.com/button_info.json?url=http://www.guildwars2.com'"

    Specifically this line:

        xpath='//li[@class=\"listLi\"]/div[@class=\"views\"]/a/span'

    It's problematic because of the quoting: I have to nest them three levels deep and I've run out of quote characters to use. I've tried the following variations without success:

        //no attribute quoting
        xpath='//li[@class=listLi]/div[@class=views]/a/span'
        //try to quote attribute w/ backslash & single quote
        xpath='//li[@class=\'listLi\']/div[@class=\'views\']/a/span'
        //try to quote attribute w/ backslash & double quote
        xpath='//li[@class=\"listLi\"]/div[@class=\"views\"]/a/span'
        //try to quote attribute with double single quotes, like SQL
        xpath='//li[@class=''listLi'']/div[@class=''views'']/a/span'
        //try to quote attribute with double double quotes, like SQL
        xpath='//li[@class=""listLi""]/div[@class=""views""]/a/span'
        //try to quote attribute with quote entities
        xpath='//li[@class=&quot;listLi&quot;]/div[@class=&quot;views&quot;]/a/span'
        //try to surround XPath with backslash & double quote
        xpath=\"//li[@class='listLi']/div[@class='views']/a/span\"
        //try to surround XPath with double double quote
        xpath=""//li[@class='listLi']/div[@class='views']/a/span""

    All without success. I don't see much out there about escaping XPath strings, but everything I've found seems to be variations on using concat (which won't help because neither ' nor " are available) or HTML entities. Not using quotes for the attributes doesn't throw an error but fails because it's not the actual XPath string I need. I don't see anything in the YQL docs about how to handle escaping. I'm aware of how edge-casey this is but was hoping they'd have some sort of escaping guide.

    Read the article

  • Problem: writing parameter values to data driven MSTEST output

    - by Shubh
    Hi, I am trying to extract some information about the parameter variants used in an MSTest data-driven test case from the .trx file. Currently, for data-driven tests, I get the output of the same test case with different inputs as a sequence of tags, but there is no info about the value of the variants. Example: Suppose we have a [data driven] TestMethod1() and the data rows contain variations a and b. There are two variations: a=1,b=2 for which the test passes and a=3,b=4 for which the test fails. If we can output the info that it was a=1,b=2 which passed and a=3,b=4 which failed in the trx file, the output will be meaningful. Benefits: better information about test case runs from the output file alone (without any dependencies); investigating the test failure without rerunning the whole set; and if the data rows change in the data source (now a=1,b=2 pass and a=5,b=6 fail), it is easy to decipher that the errors are different, although the fail sequence is still the same (row 0 passes and row 1 fails, but now row 1 is different). Has any of you gone through a similar problem? What did you follow? I tried to put the parameter value information in the Description attribute of TestMethod; it didn't work. Any other methods you think could work? Thanks, Shubhankar

    Read the article

  • How can I prevent text displacement for some foreign language fonts?

    - by weltraumpirat
    I have a multilingual project (currently 13 languages), which uses many different font variations of "Helvetica Neue", mostly bold, condensed, and regular cuts from the LinoType Pro font set (which includes Western European characters) and the same for Cyrillic. We will probably add Chinese and Japanese variations in the future. I have set up the project to use different CSS stylesheets and separately load the fonts for each version, depending on which language the user selects, so I can have different line heights, kerning and/or font sizes to make everything keep the original look, even if the fonts look nothing alike. All of this works well, except for one problem: for some reason, all Cyrillic letters seem to be displaced. They appear 2-3 pixels below the correct baseline, and actually protrude across the textfield's bottom border, even when the field is set to autosize. When I use textfield.getCharBoundaries(), all values seem to be correct, even though they obviously aren't rendered correctly. To make everything look neat, I could of course manually move all problematic textfields up or down according to language and font size, but I was wondering if there was some way to prevent or at least detect this kind of displacement in order to automatically handle the adjustments - the Flash Player should have some sort of information on how things are rendered, shouldn't it? Have any of you had similar problems? Or better yet: a solution?

    Read the article

  • how to drop_caches in OpenVZ centos6 container

    - by Omar Jackman
    I have tried

        sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
        sudo echo 3 > /proc/sys/vm/drop_caches
        echo 3 > /proc/sys/vm/drop_caches

    and a bunch of other variations, but with every try I get:

        bash: /proc/sys/vm/drop_caches: Permission denied

    How do I clear the RAM used for buffers/cache in my CentOS 6 OpenVZ container? It seems like the only way to do what I need is to reboot the container.

    Read the article
