Search Results

Search found 12919 results on 517 pages for 'tool pack'.

Page 202/517 | < Previous Page | 198 199 200 201 202 203 204 205 206 207 208 209  | Next Page >

  • Android ADB has moved and Eclipse is looking in the old place

    - by Peter Nelson
    I did an SDK update last night and it moved adb.exe. In its place it left a file called "adb_has_moved.txt" saying: "The adb tool has moved to platform-tools/. If you don't see this directory in your SDK, launch the SDK and AVD Manager (execute the android tool) and install 'Android SDK Platform-tools'. Please also update your PATH environment variable to include the platform-tools/ directory, so you can execute adb from any location." So I did all that, including the PATH, and now I can start adb.exe from any DOS prompt. But I still can't start it from Eclipse (Galileo 3.5.2). When I try, it says "Location of the Android SDK has not been set up in the preferences", which is not true: the SDK IS set up in Preferences. The real problem is at the top of the Preferences window, where it says "Could not find C:\SDK\android-sdk-windows\tools\adb.exe!" No kidding: the update moved it to C:\SDK\android-sdk-windows\platform-tools. Because it's specifying a specific (wrong) path, Eclipse is bypassing the PATH variable. So how do I get Eclipse to look in the right place?

    Read the article

  • Will Apple bundle the Mono Touch runtime with every iPhone?

    - by Zoran Simic
    It strikes me as a good idea for Apple to negotiate with Novell and bundle the Mono Touch runtime (only the runtime, of course) into every iPhone and iPod Touch. Perhaps even make it a "one-time install" that automatically gets downloaded from the App Store the first time one downloads an app built with Mono Touch, making every subsequent Mono Touch app much lighter to download (without the runtime). Doing so would be similar in a way to adding Boot Camp to OS X: it would make it easier for C# developers to join the party, but that wouldn't mean these developers will all stick to C#... What convinced me to buy a Mac was Boot Camp - I figured I could always install Windows if I didn't like OS X (and I liked the hardware, so no problem there). Six months later, I'm using OS X full time... Would there be any technical issues in doing so? I see only advantages for all parties, and not one disadvantage to anyone (except maybe for the few unfortunate Apple employees who would have to test the crap out of the Mono Touch runtime before bundling it): Novell wins because Mono Touch becomes much more viable (Mono Touch apps become much lighter all of a sudden); developers win because now there's one more tool in the tool belt, and many C# developers would be very interested in this; Apple wins because it would bring even more attention to the platform, more revenue in developer fees, more potential great apps, etc.; users win because less space is used by different apps keeping copies of the same runtime on their devices. Would there be a major technical obstacle to bundling Mono Touch with iPhone OS? Edit: Changed the title from "Should" to "Will Apple bundle the runtime?"; I think the consensus on predicting that means a lot to those considering going with Mono Touch.

    Read the article

  • PNG image processing in .NET

    - by Oybek
    I have the following task: take a base image and overlay another one on it. Both the base image and the overlay are 8-bit PNGs. Here are the base (left) and overlay (right) images. Here is the result and how it should look. The picture on the left is a screenshot with one picture positioned on top of the other (HTML and positioning), and the second is the result of programmatic merging. As you can see in the screenshot, the borders of the text are darker. Also, here are the sizes of the images: base image 14.9 KB, overlay image 6.87 KB, result image 34.8 KB. The size of the resulting image is also huge. Here is the code that I use to merge those pictures: /*...*/ public Stream Concatinate(Stream baseStream, params Stream[] overlayStreams) { var @base = Image.FromStream(baseStream); var canvas = new Bitmap(@base.Width, @base.Height); using (var g = canvas.ToGraphics()) { g.DrawImage(@base, 0, 0); foreach (var item in overlayStreams) { using (var overlayImage = Image.FromStream(item)) { try { Overlay(@base, overlayImage, g); } catch { } } } } var ms = new MemoryStream(); canvas.Save(ms, ImageFormat.Png); canvas.Dispose(); @base.Dispose(); return ms; } /*...*/ /*Tograpics extension*/ public static Graphics ToGraphics(this Image image, CompositingQuality compositingQuality = CompositingQuality.HighQuality, SmoothingMode smoothingMode = SmoothingMode.HighQuality, InterpolationMode interpolationMode = InterpolationMode.HighQualityBicubic) { var g = Graphics.FromImage(image); g.CompositingQuality = compositingQuality; g.SmoothingMode = smoothingMode; g.InterpolationMode = interpolationMode; return g; } My questions are: What should I do in order to merge the images and achieve the result shown in the screenshot? How can I lower the size of the result image? Is System.Drawing a suitable tool for this, or is there a better tool for working with PNGs in .NET?
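
    For illustration, here is a minimal merge sketch (the poster's Overlay() method is not shown, so this is not it): it draws onto an explicitly 32bppArgb canvas and keeps plain source-over compositing for the overlay. Names are illustrative. Note that System.Drawing saves this canvas as a 32-bit PNG, which goes a long way toward explaining why the output is several times larger than the 8-bit-palette inputs; getting a small file back typically needs a separate palette quantizer.

        // A minimal merge sketch, assuming the overlay carries its transparency
        // in an alpha channel and both images share the base image's size.
        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;
        using System.IO;

        static class PngMerge
        {
            public static MemoryStream Merge(Stream baseStream, Stream overlayStream)
            {
                using (var baseImage = Image.FromStream(baseStream))
                using (var overlayImage = Image.FromStream(overlayStream))
                using (var canvas = new Bitmap(baseImage.Width, baseImage.Height,
                                               PixelFormat.Format32bppArgb))
                using (var g = Graphics.FromImage(canvas))
                {
                    // Copy the base pixels as-is, then alpha-blend the overlay on top.
                    g.CompositingMode = CompositingMode.SourceCopy;
                    g.DrawImage(baseImage, 0, 0, baseImage.Width, baseImage.Height);

                    g.CompositingMode = CompositingMode.SourceOver;
                    g.CompositingQuality = CompositingQuality.HighQuality;
                    g.DrawImage(overlayImage, 0, 0, overlayImage.Width, overlayImage.Height);

                    // The canvas is 32bppArgb, so the saved PNG is 32-bit and therefore
                    // much larger than the 8-bit-palette inputs.
                    var ms = new MemoryStream();
                    canvas.Save(ms, ImageFormat.Png);
                    ms.Position = 0;
                    return ms;
                }
            }
        }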

    Read the article

  • Helpful advice on developing a professional MS Word add-on

    - by Dan Tao
    A few months back I put together a simple proof-of-concept piece of software for a small firm with an idea for a document editing tool. The company wanted this tool to be integrated into Microsoft Word, understandably, to maximize its accessibility to the average user. I essentially wrote the underlying library with all of the core functionality as a C# project, and then used VSTO to get it running inside of Word. It felt like a bit of a duct tape solution, really; but then, I have (practically) zero experience developing tools for integration with MS Office, and it was only a proof of concept anyway. Well, the firm was quite pleased with my work overall, and they're looking to move from "proof of concept" to the real deal. Fortunately, as I said, the core functionality is all there and will only need to be somewhat tweaked and enhanced. My main concern is figuring out how to put together an application that will integrate with MS Word in a clean and polished way, and which can be deployed easily in accordance with a regular user's expectations (i.e., simply running an install program and voila, it's there in Word). I seem to remember reading somewhere that nobody uses VSTO for real professional projects. Is this true? False? What are the alternatives? And what are the tips and gotchas that I should be aware of before getting started on this issue of MS Word integration?
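
    For reference, this is roughly the shape of the ThisAddIn.cs file that the VSTO Word add-in project template generates (it only compiles inside such a project, and the event handler body here is a hypothetical call into the existing core library):

        using System;
        using Word = Microsoft.Office.Interop.Word;

        public partial class ThisAddIn
        {
            private void ThisAddIn_Startup(object sender, EventArgs e)
            {
                // "this.Application" is the running Word instance; wire up events here
                // and delegate the real work to the existing C# core library.
                this.Application.DocumentOpen += OnDocumentOpen;
            }

            private void OnDocumentOpen(Word.Document doc)
            {
                // Hypothetical call into the proof-of-concept library:
                // DocumentTools.Process(doc.FullName);
            }

            private void ThisAddIn_Shutdown(object sender, EventArgs e) { }

            private void InternalStartup()
            {
                this.Startup += ThisAddIn_Startup;
                this.Shutdown += ThisAddIn_Shutdown;
            }
        }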

    Read the article

  • Looking for an all-in-one DRM/installer/CD creation kit

    - by user30997
    The company I work for has a download manager in place that handles distribution, DRM, and installation of our products when a user gets them off our website. However, we're using a clunky system for packaging and protecting our products when we do press releases or make retail CDs. Part of the antiquation problem is that the automated system that works with our installer- and DRM-creation software is a disaster that needs to be put out of my misery. Here is the list of products that we currently produce, which a new system MUST be capable of producing: retail CDs, with a certain level of obfuscation to make copying difficult; downloadable installers that time out after a few hours of use of the product (after the time has expired, removing and reinstalling the product still leaves you blocked from use); and installers that will fail to work after a certain date. I'd love to be able to just feed a tool the directory where a complete product resides and have the installer generated with a couple of command-line operations. (The command-line issue is non-negotiable; this will be called by an automated tool.) A single-solution package would be far preferable. Software with royalty-based or per-unit licensing is not an option.

    Read the article

  • C#/.NET library for source code formatting, like the one used by Stack Overflow?

    - by Lasse V. Karlsen
    I am writing a command line tool to convert Markdown text to html output, which seems easy enough. However, I am wondering how to get nice syntax coloring for embedded code blocks, like the one used by Stack Overflow. Does anyone know either what library Stack Overflow is using, or whether there's a library out there that I can easily reuse? It would need to have some of the same "intelligence" found in the one that Stack Overflow uses, doing a best attempt at figuring out the language in use to pick the right keywords. Basically, what I want is for my own program to handle a block like the following: if (a == 0) return true; if (a == 1) return false; // fall-back Markdown Sharp, the library I'm using, by default outputs the above as a simple pre/code html block, with no syntax coloring. I'd like the same type of handling as Stack Overflow's formatting; there, the above contains blue "return" keywords, for example. Or, hmm, after checking the source of this Stack Overflow page after adding the code example, I notice that it too is formatted like a simple pre/code block. Is it pure javascript-magic at work here, so perhaps there's no such library? If there's no library that will automagically determine a possible language by the keywords used, is there one that would work if I explicitly told it the language? Since this is "my" markdown-commandline-tool, I can easily add syntax if I need to.
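
    For illustration, here is a small sketch assuming MarkdownSharp: transform the Markdown, then tag the pre/code blocks so a client-side highlighter can pick them up (Stack Overflow reportedly used Google Code Prettify at the time, which guesses the language itself). The regex post-processing step is deliberately naive.

        using System.Text.RegularExpressions;
        using MarkdownSharp;

        class MarkdownRenderer
        {
            public string Render(string markdownText)
            {
                var markdown = new Markdown();
                string html = markdown.Transform(markdownText);

                // Add the class prettify.js looks for; a server-side alternative would
                // be to run a highlighter library over each code block here instead.
                return Regex.Replace(html, "<pre><code>", "<pre class=\"prettyprint\"><code>");
            }
        }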

    Read the article

  • Help me improve my SSE YUV to RGB SSSE3 code

    - by David McPaul
    Hello, I am looking to optimise some sse code I wrote for converting yuv to rgb (both planar and packed yuv functions). i am using SSSE3 at the moment but if there are useful functions from later sse versions thats ok. I am mainly interested in how I would work out processor stalls and the like. Anyone know of any tools that do static analysis of sse code? ; ; Copyright (C) 2009-2010 David McPaul ; ; All rights reserved. Distributed under the terms of the MIT License. ; ; A rather unoptimised set of ssse3 yuv to rgb converters ; does 8 pixels per loop ; inputer: ; reads 128 bits of yuv 8 bit data and puts ; the y values converted to 16 bit in xmm0 ; the u values converted to 16 bit and duplicated into xmm1 ; the v values converted to 16 bit and duplicated into xmm2 ; conversion: ; does the yuv to rgb conversion using 16 bit integer and the ; results are placed into the following registers as 8 bit clamped values ; r values in xmm3 ; g values in xmm4 ; b values in xmm5 ; outputer: ; writes out the rgba pixels as 8 bit values with 0 for alpha ; xmm6 used for scratch ; xmm7 used for scratch %macro cglobal 1 global _%1 %define %1 _%1 align 16 %1: %endmacro ; conversion code %macro yuv2rgbsse2 0 ; u = u - 128 ; v = v - 128 ; r = y + v + v >> 2 + v >> 3 + v >> 5 ; g = y - (u >> 2 + u >> 4 + u >> 5) - (v >> 1 + v >> 3 + v >> 4 + v >> 5) ; b = y + u + u >> 1 + u >> 2 + u >> 6 ; subtract 16 from y movdqa xmm7, [Const16] ; loads a constant using data cache (slower on first fetch but then cached) psubsw xmm0,xmm7 ; y = y - 16 ; subtract 128 from u and v movdqa xmm7, [Const128] ; loads a constant using data cache (slower on first fetch but then cached) psubsw xmm1,xmm7 ; u = u - 128 psubsw xmm2,xmm7 ; v = v - 128 ; load r,b with y movdqa xmm3,xmm0 ; r = y pshufd xmm5,xmm0, 0xE4 ; b = y ; r = y + v + v >> 2 + v >> 3 + v >> 5 paddsw xmm3, xmm2 ; add v to r movdqa xmm7, xmm1 ; move u to scratch pshufd xmm6, xmm2, 0xE4 ; move v to scratch psraw xmm6,2 ; divide v by 4 paddsw xmm3, xmm6 ; and add to r psraw xmm6,1 ; divide v by 2 paddsw xmm3, xmm6 ; and add to r psraw xmm6,2 ; divide v by 4 paddsw xmm3, xmm6 ; and add to r ; b = y + u + u >> 1 + u >> 2 + u >> 6 paddsw xmm5, xmm1 ; add u to b psraw xmm7,1 ; divide u by 2 paddsw xmm5, xmm7 ; and add to b psraw xmm7,1 ; divide u by 2 paddsw xmm5, xmm7 ; and add to b psraw xmm7,4 ; divide u by 32 paddsw xmm5, xmm7 ; and add to b ; g = y - u >> 2 - u >> 4 - u >> 5 - v >> 1 - v >> 3 - v >> 4 - v >> 5 movdqa xmm7,xmm2 ; move v to scratch pshufd xmm6,xmm1, 0xE4 ; move u to scratch movdqa xmm4,xmm0 ; g = y psraw xmm6,2 ; divide u by 4 psubsw xmm4,xmm6 ; subtract from g psraw xmm6,2 ; divide u by 4 psubsw xmm4,xmm6 ; subtract from g psraw xmm6,1 ; divide u by 2 psubsw xmm4,xmm6 ; subtract from g psraw xmm7,1 ; divide v by 2 psubsw xmm4,xmm7 ; subtract from g psraw xmm7,2 ; divide v by 4 psubsw xmm4,xmm7 ; subtract from g psraw xmm7,1 ; divide v by 2 psubsw xmm4,xmm7 ; subtract from g psraw xmm7,1 ; divide v by 2 psubsw xmm4,xmm7 ; subtract from g %endmacro ; outputer %macro rgba32sse2output 0 ; clamp values pxor xmm7,xmm7 packuswb xmm3,xmm7 ; clamp to 0,255 and pack R to 8 bit per pixel packuswb xmm4,xmm7 ; clamp to 0,255 and pack G to 8 bit per pixel packuswb xmm5,xmm7 ; clamp to 0,255 and pack B to 8 bit per pixel ; convert to bgra32 packed punpcklbw xmm5,xmm4 ; bgbgbgbgbgbgbgbg movdqa xmm0, xmm5 ; save bg values punpcklbw xmm3,xmm7 ; r0r0r0r0r0r0r0r0 punpcklwd xmm5,xmm3 ; lower half bgr0bgr0bgr0bgr0 punpckhwd xmm0,xmm3 ; upper half bgr0bgr0bgr0bgr0 ; write to 
output ptr movntdq [edi], xmm5 ; output first 4 pixels bypassing cache movntdq [edi+16], xmm0 ; output second 4 pixels bypassing cache %endmacro SECTION .data align=16 Const16 dw 16 dw 16 dw 16 dw 16 dw 16 dw 16 dw 16 dw 16 Const128 dw 128 dw 128 dw 128 dw 128 dw 128 dw 128 dw 128 dw 128 UMask db 0x01 db 0x80 db 0x01 db 0x80 db 0x05 db 0x80 db 0x05 db 0x80 db 0x09 db 0x80 db 0x09 db 0x80 db 0x0d db 0x80 db 0x0d db 0x80 VMask db 0x03 db 0x80 db 0x03 db 0x80 db 0x07 db 0x80 db 0x07 db 0x80 db 0x0b db 0x80 db 0x0b db 0x80 db 0x0f db 0x80 db 0x0f db 0x80 YMask db 0x00 db 0x80 db 0x02 db 0x80 db 0x04 db 0x80 db 0x06 db 0x80 db 0x08 db 0x80 db 0x0a db 0x80 db 0x0c db 0x80 db 0x0e db 0x80 ; void Convert_YUV422_RGBA32_SSSE3(void *fromPtr, void *toPtr, int width) width equ ebp+16 toPtr equ ebp+12 fromPtr equ ebp+8 ; void Convert_YUV420P_RGBA32_SSSE3(void *fromYPtr, void *fromUPtr, void *fromVPtr, void *toPtr, int width) width1 equ ebp+24 toPtr1 equ ebp+20 fromVPtr equ ebp+16 fromUPtr equ ebp+12 fromYPtr equ ebp+8 SECTION .text align=16 cglobal Convert_YUV422_RGBA32_SSSE3 ; reserve variables push ebp mov ebp, esp push edi push esi push ecx mov esi, [fromPtr] mov edi, [toPtr] mov ecx, [width] ; loop width / 8 times shr ecx,3 test ecx,ecx jng ENDLOOP REPEATLOOP: ; loop over width / 8 ; YUV422 packed inputer movdqa xmm0, [esi] ; should have yuyv yuyv yuyv yuyv pshufd xmm1, xmm0, 0xE4 ; copy to xmm1 movdqa xmm2, xmm0 ; copy to xmm2 ; extract both y giving y0y0 pshufb xmm0, [YMask] ; extract u and duplicate so each u in yuyv becomes u0u0 pshufb xmm1, [UMask] ; extract v and duplicate so each v in yuyv becomes v0v0 pshufb xmm2, [VMask] yuv2rgbsse2 rgba32sse2output ; endloop add edi,32 add esi,16 sub ecx, 1 ; apparently sub is better than dec jnz REPEATLOOP ENDLOOP: ; Cleanup pop ecx pop esi pop edi mov esp, ebp pop ebp ret cglobal Convert_YUV420P_RGBA32_SSSE3 ; reserve variables push ebp mov ebp, esp push edi push esi push ecx push eax push ebx mov esi, [fromYPtr] mov eax, [fromUPtr] mov ebx, [fromVPtr] mov edi, [toPtr1] mov ecx, [width1] ; loop width / 8 times shr ecx,3 test ecx,ecx jng ENDLOOP1 REPEATLOOP1: ; loop over width / 8 ; YUV420 Planar inputer movq xmm0, [esi] ; fetch 8 y values (8 bit) yyyyyyyy00000000 movd xmm1, [eax] ; fetch 4 u values (8 bit) uuuu000000000000 movd xmm2, [ebx] ; fetch 4 v values (8 bit) vvvv000000000000 ; extract y pxor xmm7,xmm7 ; 00000000000000000000000000000000 punpcklbw xmm0,xmm7 ; interleave xmm7 into xmm0 y0y0y0y0y0y0y0y0 ; extract u and duplicate so each becomes 0u0u punpcklbw xmm1,xmm7 ; interleave xmm7 into xmm1 u0u0u0u000000000 punpcklwd xmm1,xmm7 ; interleave again u000u000u000u000 pshuflw xmm1,xmm1, 0xA0 ; copy u values pshufhw xmm1,xmm1, 0xA0 ; to get u0u0 ; extract v punpcklbw xmm2,xmm7 ; interleave xmm7 into xmm1 v0v0v0v000000000 punpcklwd xmm2,xmm7 ; interleave again v000v000v000v000 pshuflw xmm2,xmm2, 0xA0 ; copy v values pshufhw xmm2,xmm2, 0xA0 ; to get v0v0 yuv2rgbsse2 rgba32sse2output ; endloop add edi,32 add esi,8 add eax,4 add ebx,4 sub ecx, 1 ; apparently sub is better than dec jnz REPEATLOOP1 ENDLOOP1: ; Cleanup pop ebx pop eax pop ecx pop esi pop edi mov esp, ebp pop ebp ret SECTION .note.GNU-stack noalloc noexec nowrite progbits

    Read the article

  • How can I compare market data feed sources for quality and latency improvement?

    - by yves Baumes
    I am in the very first stages of implementing a tool to compare 2 market data feed sources, in order to prove to my boss the quality of newly developed sources (meaning there are no regressions and no missed or wrong updates) and to prove latency improvements. So the tool I need must be able to check for update differences as well as tell which source is the best (in terms of latency). Concretely, the reference source could be Reuters, while the other one is a feed handler we develop internally. People warned me that updates might not arrive in the same order, as Reuters' implementation could differ totally from ours. Therefore a simple algorithm based on the assumption that updates arrive in the same order is not likely to work. My very first idea would be to use fingerprinting to compare the feed sources, as the Shazam application does to find the title of the tune you are submitting. Google told me it is based on FFT. And I was wondering if signal processing theory could work well with market data applications. I wanted to know your own experience in that field: is it possible to develop a reasonably accurate algorithm to meet these needs? What was your own idea? What do you think about fingerprint-based comparison?
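
    For illustration, here is a sketch of an order-independent comparison that needs no signal processing: normalize each update into a content key, record when each source delivered it, then report latency deltas and misses. The types and field names are hypothetical, and a real tool would also need to handle repeated identical prices (e.g. via sequence numbers).

        using System;
        using System.Collections.Generic;

        class FeedComparer
        {
            private readonly Dictionary<string, DateTime> reference = new Dictionary<string, DateTime>();
            private readonly Dictionary<string, DateTime> candidate = new Dictionary<string, DateTime>();

            // A content "fingerprint" that ignores arrival order: the instrument plus
            // the fields that matter, not the sequence in which updates arrived.
            private static string KeyOf(string symbol, decimal bid, decimal ask)
                => $"{symbol}|{bid}|{ask}";

            public void OnReferenceUpdate(string symbol, decimal bid, decimal ask, DateTime arrival)
                => reference[KeyOf(symbol, bid, ask)] = arrival;

            public void OnCandidateUpdate(string symbol, decimal bid, decimal ask, DateTime arrival)
                => candidate[KeyOf(symbol, bid, ask)] = arrival;

            public void Report()
            {
                foreach (var kv in reference)
                {
                    if (candidate.TryGetValue(kv.Key, out var t))
                        Console.WriteLine($"{kv.Key}: candidate arrived {(t - kv.Value).TotalMilliseconds:0.0} ms after reference");
                    else
                        Console.WriteLine($"{kv.Key}: MISSING from candidate feed");
                }
            }
        }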

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • Content Management Systems for Adaptive Content [closed]

    - by andrewap
    Content management systems (CMS) allow us to easily maintain blogs, news sites, general websites, and so on. Many of them are designed to manage pages of content, and provide tools to organize and customize how that content is displayed on the web. However, as explained by Mark Boulton in his Adaptive Content Management article, and by Karen McGrane in her talk on Adapting Ourselves to Adaptive Content, we are increasingly delivering content not just to the web, but also to other platforms and channels. We need tools to manage pieces of content with meaningful metadata attached. Create once, publish everywhere. The main idea is to store content cleanly, without intertwining it with presentation markup specific to the web. Because each piece of content is compartmentalized semantically, it can easily adapt to fit different platforms and channels. Hence, it's called adaptive content. Let's look at a quick example to compare: say I manage news articles and events. To create a news article, I would tell the CMS the type of content I'm creating, and be asked to fill in a form with individual fields tailored to news articles (e.g. headline, subtitle, full text, short snippet, and images), i.e., pieces of content. With a traditional web publishing tool, I would probably have had to create a new page under News, and then type in and format the news article in a blank WYSIWYG text editor, i.e., pages of content. As you can see, the first design allows me to individually specify content in its smallest semantic unit. When I want to display or consume it, the system can easily provide the pieces I need. So here's my question: is there a CMS that is designed specifically with adaptive content in mind, and that is decoupled from the presentation layer? Note: This is not a discussion about the best CMS, or which CMS I should use. I am asking whether a very specific type of tool, a CMS designed for adaptive content, exists for developers to use.
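
    For illustration, here is a small sketch of what such a "piece of content" might look like as a model the CMS stores, independent of any page markup. The field names follow the news-article example above and are purely illustrative.

        using System;
        using System.Collections.Generic;

        class NewsArticle
        {
            public string Headline { get; set; }
            public string Subtitle { get; set; }
            public string FullText { get; set; }      // no presentation markup stored here
            public string ShortSnippet { get; set; }  // e.g. for mobile or syndication channels
            public List<Uri> Images { get; set; } = new List<Uri>();
            public DateTime PublishedOn { get; set; }
            public List<string> Tags { get; set; } = new List<string>();  // adaptive metadata
        }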

    Read the article

  • TFS: How does merging work?

    - by Johannes Rudolph
    I have a release branch (RB, starting at C5) and a changeset on trunk (C10) that I now want to merge onto RB. The file has changes at C3 (common to both), one in C7 on RB, one in C9 (trunk), and one in C10. So the history for my changed file looks like this: RB: C5 -> C7 Trunk: C3 -> C9 -> C10 When I merge C10 from trunk to RB, I'd expect to see a merge window showing me C10 | C3 | C7, since C3 is the common ancestor revision and C10 and C7 are the tips of my two branches respectively. However, my merge tool shows me C10 | C9 | C7. My merge tool is configured to show %1(OriginalFile)|%3(BaseFile)|%2(Modified File), so this tells me TFS chose C9 as the base revision. This is totally unexpected and completely contrary to the way I'm used to merges working in Mercurial or Git. Did I get something wrong, or is TFS trying to drive me nuts with merging? Is this the default TFS merge behavior? If so, can you provide insight into why they chose to implement it this way? I'm using TFS 2008 with VS2010 as a client.

    Read the article

  • Good tools for keeping the content in test/staging/live environments synchronized

    - by David Stratton
    I'm looking for recommendations on automated folder synchronization tools to keep the content in our three environments synchronized automatically. Specifically, we have several applications where a user can upload content (via a File Upload page or a similar mechanism), such as images, pdf files, word documents, etc. In the past, we had the user doing this to our live server, and as a result, our test and staging servers had to be manually synchronized. Going forward, we will have them upload content to the staging server, and we would like some software to automatically copy the files off to the test and live servers EITHER on a scheduled basis OR as the files get uploaded. I was planning on writing my own component, and either set it up as a scheduled task, or use a FileSystemWatcher, but it occurred to me that this has probably already been done, and I might be better off with some sort of synchronization tool that already exists. On our web site, there are a limited number of folders that we want to keep synchronized. In these folders, it is an all or nothing - we want to make sure the folders are EXACT duplicates. This should make it fairly straightforward, and I would think that any software that can synchronize folders would be OK, except that we also would like the software to log changes. (This rules out simple BATCH files.) So I'm curious, if you have a similar environment, how did you solve the challenge of keeping everything synchronized. Are you aware of a tool that is reliable, and will meet our needs? If not, do you have a recommendation for something that will come close, or better yet, an open source solution where we can get the code and modify it as needed? (preferably .NET). Added Also, I DID google this first, but there are so many options, I am interested mostly in knowing what actually works well vs what they SAY works, which is why I'm asking here.
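
    For illustration, here is a sketch of the roll-your-own fallback mentioned above: mirror one staging folder to the other environments as files arrive, logging each copy. The paths are placeholders, and a real tool would also need retry logic for locked files and handling for deletions.

        using System;
        using System.IO;

        class FolderMirror
        {
            static readonly string Source = @"\\staging\uploads";
            static readonly string[] Targets = { @"\\test\uploads", @"\\live\uploads" };

            static void Main()
            {
                var watcher = new FileSystemWatcher(Source) { IncludeSubdirectories = true };
                watcher.Created += (s, e) => Copy(e.FullPath);
                watcher.Changed += (s, e) => Copy(e.FullPath);
                watcher.EnableRaisingEvents = true;
                Console.ReadLine();   // keep the process alive; a Windows service would not need this
            }

            static void Copy(string sourcePath)
            {
                if (!File.Exists(sourcePath)) return;   // ignore directory events
                string relative = sourcePath.Substring(Source.Length).TrimStart('\\');
                foreach (var target in Targets)
                {
                    string destination = Path.Combine(target, relative);
                    Directory.CreateDirectory(Path.GetDirectoryName(destination));
                    File.Copy(sourcePath, destination, overwrite: true);
                    Console.WriteLine($"{DateTime.Now:s}  copied {relative} -> {target}");  // change log
                }
            }
        }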

    Read the article

  • What are the common compliance standards for software products?

    - by Jay
    This is a very generic question about software products. I would like to know what compliance standards are applicable to any software product. I know that question gives away nothing, so here is an example of what I am referring to. CiSecurity Security Certification/Compliance lists products certified by them as compliant with the standards published on their website, i.e., cisecurity.org. Compliance could be as simple as answering a questionnaire for your product and getting it approved by a third party like CiSecurity, or it could apply to your whole organization, for instance PCI-DSS compliance. I would be very interested in knowing the standards that products you know/designed/created comply with. To give you the context behind this question: I am the developer of a data-masking tool. The tool helps mask on-screen HTML text in a banking web application using filters. So, for instance, if the bank application lists user information with an SSN, my product, when integrated with the banking product, automatically identifies the SSN pattern and masks it into a pre-defined format. So, I have my product marketing team wanting more buzzwords like compliance to be able to sell it to more banking clients. Hence, understanding "compliance standards that apply to products" is a key research item for me at this point. By that I mean security compliance. I appreciate all your help and suggestions.

    Read the article

  • Is there any algorithm that can solve ANY traditional sudoku puzzle, WITHOUT guessing (or similar techniques)?

    - by justin
    Is there any algorithm that solves ANY traditional sudoku puzzle WITHOUT guessing? Here, guessing means trying a candidate and seeing how far it goes; if a contradiction is found with the guess, backtracking to the guessing step and trying another candidate; when all candidates are exhausted without success, backtracking to the previous guessing step (if there is one; otherwise the puzzle proves invalid), and so on. EDIT1: Thank you for your replies. Traditional sudoku means 81-box sudoku, without any other constraints. Let us say that we know the solution is unique: is there any algorithm that can GUARANTEE to solve it without backtracking? Backtracking is a universal tool, and I have nothing against it, but using a universal tool to solve sudoku decreases the value and fun of deciphering (manually, or by computer) sudoku puzzles. How can a human being solve the so-called "hardest sudoku in the world"? Does he need to guess? I heard some researchers accidentally found that their algorithm for some data analysis can solve all sudokus. Is that true? Do they have to guess too?
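
    For illustration, here is a sketch of the simplest guess-free technique (naked singles): repeatedly fill any cell that has exactly one remaining candidate. Real "no guessing" solvers layer many more propagation rules (hidden singles, pairs, fish, and so on) on top of this loop, and whether any fixed set of such rules covers every valid puzzle is exactly the open question the post raises.

        static class SudokuPropagation
        {
            // grid is 9x9, 0 = empty; returns true only if propagation alone finished it.
            public static bool SolveByNakedSingles(int[,] grid)
            {
                bool progress = true;
                while (progress)
                {
                    progress = false;
                    for (int r = 0; r < 9; r++)
                        for (int c = 0; c < 9; c++)
                        {
                            if (grid[r, c] != 0) continue;
                            int candidate = 0, count = 0;
                            for (int v = 1; v <= 9 && count < 2; v++)
                                if (Allowed(grid, r, c, v)) { candidate = v; count++; }
                            if (count == 1) { grid[r, c] = candidate; progress = true; }
                        }
                }
                for (int r = 0; r < 9; r++)
                    for (int c = 0; c < 9; c++)
                        if (grid[r, c] == 0) return false;
                return true;
            }

            // A value is allowed if it does not already appear in the row, column, or box.
            static bool Allowed(int[,] grid, int row, int col, int value)
            {
                for (int i = 0; i < 9; i++)
                    if (grid[row, i] == value || grid[i, col] == value) return false;
                int br = row / 3 * 3, bc = col / 3 * 3;
                for (int r = br; r < br + 3; r++)
                    for (int c = bc; c < bc + 3; c++)
                        if (grid[r, c] == value) return false;
                return true;
            }
        }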

    Read the article

  • After Delete Trigger Fires Only After Delete?

    - by Brandi
    I thought "after delete" meant that the trigger is not fired until after the delete has already taken place, but here is my situation... I made 3, nearly identical SQL CLR after delete triggers in C#, which worked beautifully for about a month. Suddenly, one of the three stopped working while an automated delete tool was run on it. By stopped working, I mean, records could not be deleted from the table via client software. Disabling the trigger caused deletes to be allowed, but re-enabling it interfered with the ability to delete. So my question is 'how can this be the case?' Is it possible the tool used on it futzed up the memory? It seems like even if the trigger threw an exception, if it is AFTER delete, shouldn't the records be gone? All the trigger looks like is this: ALTER TRIGGER [sysdba].[AccountTrigger] ON [sysdba].[ACCOUNT] AFTER DELETE AS EXTERNAL NAME [SQL_IO].[SQL_IO.WriteFunctions].[AccountTrigger] GO The CLR trigger does one select and one insert into another database. I don't yet know if there are any errors from SQL Server Mgmt Studio, but will update the question after I find out.
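
    For context, here is a sketch of the general shape such a CLR AFTER DELETE trigger has (the names are hypothetical, not the poster's actual SQL_IO code). One relevant detail: an unhandled exception inside the trigger causes SQL Server to roll back the whole statement, so the rows come back even though the trigger is "after" the delete, which would match the symptom of deletes appearing to be blocked.

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public class WriteFunctions
        {
            [SqlTrigger(Name = "AccountTrigger", Target = "[sysdba].[ACCOUNT]", Event = "FOR DELETE")]
            public static void AccountTrigger()
            {
                // The in-process "context connection" can see the DELETED pseudo-table.
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    var cmd = new SqlCommand("SELECT ACCOUNTID FROM DELETED", conn);  // column name is hypothetical
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Insert into the other database here. Wrapping this work in
                            // try/catch (and logging the failure) keeps a problem in the
                            // secondary write from rolling back the original delete.
                        }
                    }
                }
            }
        }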

    Read the article

  • SOLR not searching on certain fields

    - by andy
    hey guys, just installed solr, edited the schema.xml, and am now trying to index it and search on it with some test data. In the XML file I'm sending to SOLR, one of my fields look like this: <field name="PageContent"><![CDATA[<p>some text in a paragrah tag</p>]]></field> There's HTML there, so I've wrapped it in CDATA. In my SOLR schema.xml, the definition for that field looks like this: <field name="PageContent" type="text" indexed="true" stored="true"/> When I ran the POSTing tool, everything went ok, but when I search for content which I know is inside the PageContent field, I get no results. However, when I set the node to PageContent, it works. But if I set it to any other field, it doesn't search in PageContent. Am I doing something wrong? what's the issue? thanks very much for any help cheers! UPDATE Just to clarify on the error. I've uploaded a "doc" with the following data: <field name="PageID">928</field> <field name="PageName">some name</field> <field name="PageContent"><![CDATA[<p>html content</p>]]></field> In my schema I've defined the fields as such: <field name="PageID" type="integer" indexed="true" stored="true" required="true"/> <field name="PageName" type="text" indexed="true" stored="true"/> <field name="PageContent" type="text" indexed="true" stored="true"/> And: <uniqueKey>PageID</uniqueKey> <defaultSearchField>PageName</defaultSearchField> Now, when I use the Solr admin tool and search for "some name" I get a result. But, if I search for "html content", or "html", or "content", or "928", I get no results why? cool, thanks!

    Read the article

  • [CODE GENERATION] How to generate DELETE statements in PL/SQL, based on the tables FK relations?

    - by The chicken in the kitchen
    Is it possible via script/tool to generate authomatically many delete statements based on the tables fk relations, using Oracle PL/SQL? In example: I have the table: CHICKEN (CHICKEN_CODE NUMBER) and there are 30 tables with fk references to its CHICKEN_CODE that I need to delete; there are also other 150 tables foreign-key-linked to that 30 tables that I need to delete first. Is there some tool/script PL/SQL that I can run in order to generate all the necessary delete statements based on the FK relations for me? (by the way, I know about cascade delete on the relations, but please pay attention: I CAN'T USE IT IN MY PRODUCTION DATABASE, because it's dangerous!) I'm using Oracle DataBase 10G R2. This is the result I've written, but it is not recursive: This is a view I have previously written, but of course it is not recursive! CREATE OR REPLACE FORCE VIEW RUN ( OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME, VINCOLO ) AS SELECT OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME, '(' || LTRIM ( EXTRACT (XMLAGG (XMLELEMENT ("x", ',' || COLUMN_NAME)), '/x/text()'), ',') || ')' VINCOLO FROM ( SELECT CON1.OWNER OWNER_1, CON1.TABLE_NAME TABLE_NAME_1, CON1.CONSTRAINT_NAME CONSTRAINT_NAME_1, CON1.DELETE_RULE, CON1.STATUS, CON.TABLE_NAME, CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME FROM DBA_CONSTRAINTS CON, DBA_CONS_COLUMNS COL, DBA_CONSTRAINTS CON1 WHERE CON.OWNER = 'TABLE_OWNER' AND CON.TABLE_NAME = 'TABLE_OWNED' AND ( (CON.CONSTRAINT_TYPE = 'P') OR (CON.CONSTRAINT_TYPE = 'U')) AND COL.TABLE_NAME = CON1.TABLE_NAME AND COL.CONSTRAINT_NAME = CON1.CONSTRAINT_NAME --AND CON1.OWNER = CON.OWNER AND CON1.R_CONSTRAINT_NAME = CON.CONSTRAINT_NAME AND CON1.CONSTRAINT_TYPE = 'R' GROUP BY CON1.OWNER, CON1.TABLE_NAME, CON1.CONSTRAINT_NAME, CON1.DELETE_RULE, CON1.STATUS, CON.TABLE_NAME, CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME) GROUP BY OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME; ... and it contains the error of using DBA_CONSTRAINTS instead of ALL_CONSTRAINTS...

    Read the article

  • C# IE Proxy Detection not working from a Windows Service

    - by mytwocents
    This is very odd. I've written a Windows Forms tool in C# to return the default IE proxy. It works (see the code snippets below). I've created a service to return the proxy using the same code; it works when I step into the debugger, BUT it doesn't work when the service is running as compiled binary code. This must be some sort of permissions or context thing that I cannot recreate. Further, if I call the tool from the Windows service, it still doesn't return the proxy information... this seems isolated to the service somehow. I don't see any permission errors in the event log; it's as if a service has a totally different context than a regular Windows Forms exe. I've tried both: WebRequest.GetSystemWebProxy() and Uri requestUri = new Uri(urlOfDomain); HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUri); IWebProxy proxy = request.Proxy; if (!proxy.IsBypassed(requestUri)){ Uri uri2 = proxy.GetProxy(request.RequestUri); } Thanks for any help!

    Read the article

  • Building a custom Linux Live CD

    - by Mike Heinz
    Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch? I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible. The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those all are designed for showing off Linux on the desktop - which means they take forever to boot. Can anyone fix me up with a current How-To? Update: So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator". My reason for choosing this tool was that it is available for RedHat-based distributions like CentOs, Fedora and RHEL - which are all distributions that my company supports already. In addition, while the project is very poorly documented it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell. The whole job would have only taken an hour or two if there had been a README explaining the configuration file!

    Read the article

  • How do virtual destructors work?

    - by Prabhu
    A few hours back I was fiddling with a memory leak issue, and it turned out that I really got some basic stuff about virtual destructors wrong! Let me explain my class design. class Base { virtual void push_elements() {} }; class Derived : public Base { vector<int> x; public: void push_elements() { for(int i = 0; i < 5; i++) x.push_back(i); } }; int main() { Base* b = new Derived(); b->push_elements(); delete b; } The BoundsChecker tool reported a memory leak in the derived class vector, and I figured out that the base destructor is not virtual, so the derived class destructor is not called. Surprisingly, it got fixed when I made the destructor virtual. Isn't the vector deallocated automatically even if the derived class destructor is not called? Is that a quirk in the BoundsChecker tool, or is my understanding of virtual destructors wrong?

    Read the article

  • ObjectiveC - Releasing objects added as parameters

    - by NobleK
    OK, here goes. Being a Java developer, I'm still struggling with memory management in Objective-C. I have all the basics covered, but once in a while I encounter a challenge. What I want to do is something which in Java would look like this: MyObject myObject = new MyObject(new MyParameterObject()); The constructor of the MyObject class takes a parameter of type MyParameterObject which I instantiate on the fly. In Objective-C I tried to do this using the following code: MyObject *myObject = [[MyObject alloc] init:[[MyParameterObject alloc] init]]; However, running the Build and Analyze tool, this gives me a "Potential leak of an object" warning for the MyParameterObject, and the leak indeed occurs when I test it using Instruments. I do understand why this happens, since I am taking ownership of the object with the alloc method and not relinquishing it; I just don't know the correct way of doing it. I tried using MyObject *myObject = [[MyObject alloc] init:[[[MyParameterObject alloc] init] autorelease]]; but then the Analyze tool told me that "Object sent -autorelease too many times". I could solve the issue by modifying the init method of MyParameterObject to return [self autorelease]; instead of just return self;. Analyze still warns about a potential leak, but it doesn't actually occur. However, I believe that this approach violates the conventions for managing memory in Objective-C, and I really want to do it the right way. Thanks in advance.

    Read the article

  • What makes an effective UI for displaying versioning of structured hierarchical data

    - by Fadrian Sudaman
    Traditional version control systems display versioning information by grouping projects, folders, and files, with a tree view on the left and a details view on the right; you then click on each item to look at the revision history for that configuration item. Assuming that I have all the historical versioning information available for a project from an object-oriented model perspective (e.g. classes - methods - parameters and so on), what do you think would be the most effective way to present such information in a UI so that you can easily navigate and access both the snapshot view of the project and the historical versioning information? Put yourself in the position that you are using a tool like this every day in your job, like you are currently using SVN, SS, Perforce or any VCS system: what would contribute to the usability, productivity and effectiveness of the tool? I personally find the classical way of displaying folders and files described above very restrictive and less effective for displaying deeply nested logical models. Assuming that this is a greenfield project and not restricted by specific technology, how do you think I should best approach this? I am looking for ideas and input here to add value to my research project. Feel free to make any suggestions that you think are valuable. Thanks again to anyone who shares their thoughts.

    Read the article

  • jQuery "best practices" is an oxymoron [closed]

    - by n00b
    jQuery "best practices" is an oxymoron for several reasons, sadly. To speak candidly, the entire jQuery library has a HIGHLY inconsistent API and virtually no coding standards. It's basically a subset of JavaScript, a new language if you will, which is probably the most confusing part for normal JavaScript developers. In my opinion, use normal JavaScript where possible to save on memory consumption. Or, better yet, use a professional tool like YUI. If you already come from a JavaScript background, you will appreciate it much more than jQuery because it doesn't have n00b wrappers around everything. In tools like YUI, you interact directly with the native DOM to do things, instead of a super-jQuery object that tries to do everything. I'll get voted down for telling the truth. Don't get me wrong, jQuery is cool if you wanna throw something flashy up quickly, but if you're going to build a larger app you're going to need a more refined tool (especially since jQuery leaks memory like no other once you start chaining too many things). jQuery does have one of the lowest learning curves, and you won't outgrow it for a while, but when you do, it will be highly apparent and tedious to migrate to something else. (Over 60 years of programming experience. Someone clean this up and make it community wiki, please.)

    Read the article

  • How to achieve this single-line CSS format

    - by metal-gear-solid
    I have tried many formats, but this one is the best for finding classes and IDs easily without using "Find". However, it only works well if the editor window is wide. I use this tool to format my CSS with one rule per line: http://www.newmediacampaigns.com/files/posts/css-formatting/clean.php
    #wrapper {width:800px; margin:0 auto;}
    #header {height:100px; position:relative;}
    #feature .post {width:490px; float:left;}
    #feature .link {width:195px; display:inline; margin:0 10px 0 0;float:right;border-bottom:1px solid red}
    #footer {clear:both; font-size:93%; float:none;}
    I need it like this (see the selector #feature .link below): if any selector has more than 3 properties, the remaining properties should continue on the next line, as in this example, because I work in a CMS where the textbox in which I write CSS is not very wide. Is there any tool/software that can format CSS like this? Or, after getting the single-line format, can we apply this formatting to long lines in an automated way?
    #wrapper {width:800px; margin:0 auto;}
    #header {height:100px; position:relative;}
    #feature .post {width:490px; float:left;}
    #feature .link {width:195px; display:inline; margin:0 10px 0 0; float:right;
        border-bottom:1px solid red}
    #footer {clear:both; font-size:93%; float:none;}
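
    For illustration, here is a sketch of the automated step asked for above: take single-line rules and wrap the declarations of any rule with more than 3 properties onto a second line. The parsing is deliberately simple (no nested at-rules or comments), so treat it as a starting point rather than a finished formatter.

        using System;
        using System.Linq;
        using System.Text.RegularExpressions;

        static class CssReflow
        {
            public static string Format(string css, int maxPerLine = 3)
            {
                return Regex.Replace(css, @"(?<sel>[^{}]+)\{(?<body>[^}]*)\}", m =>
                {
                    var props = m.Groups["body"].Value
                        .Split(';')
                        .Select(p => p.Trim())
                        .Where(p => p.Length > 0)
                        .ToList();

                    string selector = m.Groups["sel"].Value.Trim();
                    if (props.Count <= maxPerLine)
                        return selector + " {" + string.Join("; ", props) + ";}";

                    // The first maxPerLine declarations stay on the selector's line;
                    // the rest continue indented on the next line.
                    string first = string.Join("; ", props.Take(maxPerLine));
                    string rest = string.Join("; ", props.Skip(maxPerLine));
                    return selector + " {" + first + ";" + Environment.NewLine
                         + "    " + rest + ";}";
                });
            }
        }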

    Read the article

< Previous Page | 198 199 200 201 202 203 204 205 206 207 208 209  | Next Page >