Search Results

Search found 24094 results on 964 pages for 'image processing'.


  • Reading Windows ACLs from Java

    - by Matt Sheppard
    From within a Java program, I want to be able to list the Windows users and groups who have permission to read a given file. Obviously Java has no built-in ability to read the Windows ACL information, so I'm looking for other solutions. Are there any third-party libraries available which provide direct access to the ACL information for a Windows file? Failing that, running cacls and capturing and processing its output might be a reasonable temporary solution - is the output format of cacls thoroughly documented anywhere, and is it likely to change between versions of Windows?
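
    If a Java 7 or newer runtime is an option (NIO.2 arrived after this question was asked), the built-in AclFileAttributeView exposes Windows ACLs directly, which avoids both third-party libraries and parsing cacls output. A minimal sketch, with a placeholder path:

        import java.nio.file.*;
        import java.nio.file.attribute.*;

        public class ListAcl {
            public static void main(String[] args) throws Exception {
                Path file = Paths.get("C:\\data\\report.txt"); // hypothetical path
                AclFileAttributeView view =
                        Files.getFileAttributeView(file, AclFileAttributeView.class);
                for (AclEntry entry : view.getAcl()) {
                    // Principal (user or group), ALLOW/DENY, and the permission set
                    System.out.println(entry.principal().getName()
                            + " " + entry.type() + " " + entry.permissions());
                }
            }
        }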


  • How to lock non-browser clients from submitting a request?

    - by Thomas Kohl
    I want to block non-browser clients from accessing certain pages / successfully making a request. The website content is served to authenticated users. What happens is that our user gives his credentials for our website to a 3rd party - it can be another website or a mobile application - which then performs requests on his behalf. Say there is a form that the user fills out to send a message. Can I protect this form so that the server processing the submission can tell whether the user submitted it directly from the browser or not? I don't want to use CAPTCHA for usability reasons. Can I do it with some JavaScript?


  • 'e-Commerce' scalable database model

    - by Ruben Trancoso
    I would like to understand database scalability, so I've just heard a talk about Habits of Highly Scalable Web Applications: http://techportal.ibuildings.com/2010/03/02/habits-of-highly-scalable-web-applications/ In it, the presenter mainly talks about relational database scalability. I have also read something about MapReduce, column-oriented tables, Bigtable, Hypertable, etc., trying to understand which are the most up-to-date methods for scaling web application data. But it is hard for me to understand where this second group fits. Does it serve as a transactional, reliable data store, or is it just for large-scale access and processing, so that to handle fine-grained operations we will always need to rely on RDBMSs? Could someone give a comprehensive landscape of these new technologies and how to use them?


  • Wireless barcode scanner

    - by Zinx
    Hi All, I have 2 wireless barcode scanners. I have created an application in C# which reads a barcode and sends the data to a web service, which then manipulates the data and does further processing. When I start the application, it first tries to connect to the web service and will proceed further only if the connection succeeded. The problem I am facing is: if I deploy the application through Visual Studio, then it works fine and connects to the web service. But if I just copy the contents (exe and config files) manually, then it gives an "unknown host name" error. Can someone please help me understand how this connection works? Does it need some special settings in the scanner device which Visual Studio does automatically during deployment? Thanks and Cheers.


  • Silverlight/Web Service Serializing Interface for use Client Side

    - by Steve Brouillard
    I have a Silverlight solution that references a third-party web service. This web service generates XML, which is then processed into objects for use in Silverlight binding. At one point the processing of XML to objects was done client-side, but we ran into performance issues and decided to move this processing to the proxies in the hosting web project to improve performance (which it did). This is obviously a gross over-simplification, but it should work. My basic project structure looks like this:

    Solution
    Solution.Web - Holds the web page that hosts Silverlight, as well as proxies that access web services and process as required (and obviously the references to those web services).
    Solution.Infrastructure - Holds references to the proxy web services in the .Web project, all generated code from serialized objects from those proxies, and code around those objects that needs to be client-side.
    Solution.Book - The particular project that uses the objects in question after they are processed down into Infrastructure.

    I've defined the following interface and class in the Web project. They represent the type of objects that the XML from the original third party gets transformed into, and since this is the only project in the Silverlight app that is actually server-side, that was the place to define and use them.

        //Doesn't get much simpler than this.
        public interface INavigable
        {
            string Description { get; set; }
        }

        //Very simple class too
        public class IndexEntry : INavigable
        {
            public List<IndexCM> CMItems { get; set; }
            public string CPTCode { get; set; }
            public string DefinitionOfAbbreviations { get; set; }
            public string Description { get; set; }
            public string EtiologyCode { get; set; }
            public bool HighScore { get; set; }
            public IndexToTabularCommandArguments IndexToTabularCommandArgument { get; set; }
            public bool IsExpanded { get; set; }
            public string ManifestationCode { get; set; }
            public string MorphologyCode { get; set; }
            public List<TextItem> NonEssentialModifiersAndQualifyingText { get; set; }
            public string OtherItalics { get; set; }
            public IndexEntry Parent { get; set; }
            public int Score { get; set; }
            public string SeeAlsoReference { get; set; }
            public string SeeReference { get; set; }
            public List<IndexEntry> SubEntries { get; set; }
            public int Words { get; set; }
        }

    Again, both of these items are defined in the Web project. Notice that IndexEntry implements INavigable. When the code for IndexEntry is auto-generated in the Infrastructure project, the definition of the class does not include the implementation of INavigable. After discovering this, I thought "no problem, I'll create another partial class file reiterating the implementation". Unfortunately (I'm guessing because it isn't being serialized), that interface isn't recognized in the Infrastructure project, so I can't simply do that.

    Here's where it gets really weird. The BOOK project CAN see the INavigable interface. In fact I use it in Book, though Book has no reference to the web service in the Web project where the thing is defined, though Infrastructure does. Just as a test, I linked to the INavigable source file from inside the Infrastructure project. That allowed me to reference it in that project and compile, but it causes havoc in the Book project, because now there's a conflict between the one defined in Infrastructure and the one defined in the Web project's web service. This is behavior I would expect. So, to try and sum up a bit:

    The Web project has a web service that processes data from a third-party service, and has a class and an interface defined in it; the class implements the interface. The Infrastructure project references the web service in the Web project, and the Book project references the Infrastructure project. The implementation of the interface by the class does NOT serialize down, so the auto-generated code in Infrastructure does not show this relationship, breaking code further downstream. The Book project, which is further downstream, CAN see the interface as defined in the Web project, even though its only reference is through the Infrastructure project, which CAN'T see it.

    Am I simply missing something easy here? Can I apply an attribute to either the interface definition or to its implementation in the class to ensure its visibility downstream? Anything else I can do here? I know this is a bit convoluted, and for anyone still with me here, thanks for your patience and any advice you might have. Cheers, Steve
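
    One workaround that is sometimes used in this situation (a sketch only, not a verified fix for this exact project layout): keep INavigable in a single source file, add it to both the Web and the Infrastructure projects via "Add As Link", and reattach the interface to the generated proxy type with a partial class. This assumes the generated IndexEntry proxy is declared partial and that the partial class below sits in the same namespace as the generated code.

        // Shared\INavigable.cs - linked into both Solution.Web and Solution.Infrastructure
        public interface INavigable
        {
            string Description { get; set; }
        }

        // Solution.Infrastructure\IndexEntry.Partial.cs - same namespace as the generated proxy
        public partial class IndexEntry : INavigable
        {
            // Description is already generated on the proxy type, so nothing more is needed here.
        }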


  • Reading chunked data from HttpEntity

    - by Gagan
    I have the following code:

        HttpClient FETCHER
        HttpResponse response = FETCHER.execute(host, httpMethod);

    I'm trying to read its contents into a string like this:

        HttpEntity entity = response.getEntity();
        InputStream st = entity.getContent();
        StringWriter writer = new StringWriter();
        IOUtils.copy(st, writer);
        String content = writer.toString();

    The problem is, when I fetch the http://www.google.co.in/ page, the transfer encoding is chunked, and I get only the first chunk. It fetches up to the first "". How do I get all the chunks at once so I can dump the complete output and do some processing on it?
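
    For what it's worth, a minimal sketch of the same fetch using Apache HttpClient's own EntityUtils, which keeps reading until the final chunk has arrived (assuming HttpClient 4.x; the URL is just the one from the question):

        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.util.EntityUtils;

        public class FetchAll {
            public static void main(String[] args) throws Exception {
                HttpClient client = new DefaultHttpClient();
                HttpResponse response = client.execute(new HttpGet("http://www.google.co.in/"));
                HttpEntity entity = response.getEntity();
                // EntityUtils drains the underlying stream to its end, so every chunk is included.
                String content = EntityUtils.toString(entity, "UTF-8");
                System.out.println(content.length());
            }
        }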


  • Why can't I use 'django-admin.py makemessages -l cn'

    - by zjm1126
    1. This prints:

        D:\zjm_code\register2>python D:\Python25\Lib\site-packages\django\bin\django-admin.py makemessages -l cn
        Error: This script should be run from the Django SVN tree or your project or app tree.
        If you did indeed run it from the SVN checkout or your project or application, maybe you
        are just missing the conf/locale (in the django tree) or locale (for project and
        application) directory? It is not created automatically, you have to create it by hand
        if you want to enable i18n for your project or application.

    2. I made a locale directory, and:

        D:\zjm_code\register2>python D:\Python25\Lib\site-packages\django\bin\django-admin.py makemessages -l cn
        processing language cn
        Error: errors happened while running xgettext on __init__.py
        'xgettext' is not recognized as an internal or external command, operable program or batch file.
        D:\Python25\lib\site-packages\django\core\management\base.py:234: RuntimeWarning: tp_compare didn't return -1 or -2 for exception
          sys.exit(1)

    3. ok http://hi.baidu.com/zjm1126/blog/item/f28e09deced15353ccbf1a82.html


  • A very basic issue with routes in ruby

    - by Haris
    I am new to Ruby, and while creating a sample application I found an issue: whenever I go to http://127.0.0.1:3000/people/index, the show action is executed by default and "index" is taken as a parameter. This is the server log:

        Started GET "/people/index" for 127.0.0.1 at 2010-12-23 18:43:01 +0500
          Processing by PeopleController#show as HTML
          Parameters: {"id"=>"index"}

    I have this in my routes file:

        root :to => "people#index"
        resources :people
        match ':controller(/:action(/:id(.:format)))'

    What is going on here and how can I fix the issue?


  • jQuery droppable accordion

    - by awshepard
    I've been playing around with trying to create a droppable accordion for a little while, and haven't gotten it to be very responsive. When I drag an item over the accordion, it takes 5+ seconds for the accordion element to open (if it does at all). Sometimes I have to "wave" the dragged element over the accordion element. I know I read something a while back about event processing in JavaScript - something along the lines of the browser not always passing control to the JavaScript engine when you think it does, or something like that, resulting in weird timing. Has anyone else tried to do this before? Have you found jQuery/JavaScript to be this slow? Do you have any references for how to get a responsive droppable accordion (the jQuery UI site doesn't seem to, and I didn't find anything on SO or Google)? Thanks!
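
    For reference, one arrangement that is sometimes suggested for this (a sketch, assuming jQuery UI with an accordion whose headers are h3 elements inside #accordion; none of it comes from the question): make each header its own droppable with pointer tolerance and open the panel from the over callback, rather than relying on hover detection over the whole widget.

        // Each accordion header reacts as soon as the pointer carrying a draggable enters it.
        $("#accordion h3").droppable({
            tolerance: "pointer",
            over: function (event, ui) {
                // Reuse the accordion's own click handling to expand this section.
                $(this).trigger("click");
            },
            drop: function (event, ui) {
                // Handle the actual drop here.
            }
        });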


  • Stop 2 identical queries from executing almost simultaneously?

    - by James Simpson
    I have developed an AJAX-based game with a bug that is very rare, but in volume happens at least once per hour: for some reason two requests get sent to the processing page almost simultaneously (in the last case I tracked, the requests were .0001 ms apart). There is a check right before the query is executed to make sure it doesn't get executed twice, but since the difference is so small, the check hasn't finished before the next query gets executed. I'm stumped: how can I prevent this? It is causing serious problems in the game. Just to be clearer, the query starts a new round in the game, so when it executes twice it starts 2 rounds at the same time, which breaks the game. I need to be able to stop the script from executing if the previous round isn't over, even if that previous round started .0001 ms ago.
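
    A common way to close this kind of race is to let the database perform the check and the update as one atomic step, instead of checking in application code first. A sketch, assuming MySQL and a hypothetical games table with id and round columns (none of these names come from the question):

        -- Only the request whose WHERE clause still matches wins; the loser updates 0 rows.
        UPDATE games
           SET round = round + 1,
               round_started_at = NOW()
         WHERE id = 42
           AND round = 7;  -- the round the client believes is currently running

    After running it, check the affected-row count: if it is 0, another request has already started the next round and this one should simply stop.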


  • Problem with Sphinx resultset larger than 16 MB in MySQL

    - by gmemon
    Hello All, I am accessing a large indexed text dataset using SphinxSE via MySQL. The result set is on the order of gigabytes. However, I have noticed that MySQL stops the query with the following error whenever the result set is larger than 16MB:

        1430 (HY000): There was a problem processing the query on the foreign data source. Data source error: bad searchd response length (length=16777523)

    The length value shows the size of the response that offended MySQL. I have tried the same query with Sphinx's standalone search program and it works fine. I have tried all possible variables in both MySQL and Sphinx, but nothing is helping. I am using Sphinx 0.9.9-rc2 and MySQL 5.1.46. Thanks


  • URLs with query stripped of ampersands appearing in error logs

    - by Jeremy DeGroot
    I've noticed a curious phenomenon popping up in my error logs recently. If, as the result of processing a form, I redirect my users to the URL http://www.example.com/index.php?foo=bar&bar=baz, I will see the following two URLs in my log:

        http://www.example.com/index.php?foo=barbar=baz
        http://www.example.com/index.php?foo=bar&bar=baz

    The first one is obviously incorrect and will cause my application to redirect to a 404. It always appears first, usually a second before the second one. The 404 page is not doing the redirection, so it appears that the browser is trying both versions. At first, looking at my server logs made me believe it affected only Firefox 3.6.3, but I've found an example of Safari being afflicted as well. It happens fairly intermittently, though it can occur multiple times in a user's session. I've never been able to get it to happen to me. Any thoughts as to the nature of the problem or a solution?


  • Kill a Perl system call after a timeout

    - by Fergal
    I've got a Perl script I'm using to run a file processing tool, which is started using backticks. The problem is that occasionally the tool hangs, and it needs to be killed in order for the rest of the files to be processed. What's the best way to apply a timeout after which the parent script will kill the hung process? At the moment I'm using:

        foreach $file (@FILES) {
            $runResult = `mytool $file >> $file.log`;
        }

    But when mytool hangs after n seconds, I'd like to be able to kill it and continue to the next file.
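
    A sketch of the usual alarm-based approach (the 60-second limit is an arbitrary placeholder). Note that the alarm only interrupts the wait in the parent; the hung mytool child is not killed by it, so for guaranteed cleanup a fork/exec plus a kill on the child's PID (or a module such as IPC::Run) is the more robust route.

        foreach my $file (@FILES) {
            my $runResult;
            eval {
                local $SIG{ALRM} = sub { die "timeout\n" };  # trailing \n matters for the eq test below
                alarm(60);                                   # assumed per-file time limit
                $runResult = `mytool $file >> $file.log`;
                alarm(0);
            };
            if ($@ eq "timeout\n") {
                warn "mytool timed out on $file, skipping to the next file\n";
                next;
            }
        }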


  • Which design pattern should I be using?

    - by Gabriel
    Here's briefly what I'm trying to do. The user supplies me with a link to a photo from one of several photo-sharing websites (such as Flickr, Zooomr, et al.). I then do some processing on the photo using their respective APIs. Right now, I'm only implementing one service, but I will most likely add more in the near future. I don't want to have a bunch of if/else or switch statements to define the logic for the different websites (but maybe that's necessary?). I'd rather just call GetImage(url) and have it get me the image from whatever service the URL's domain is from. I'm confused about how the GetImage function and classes should be designed. Maybe I need the strategy pattern? I'm still reading and trying to understand the various design patterns and how I could make one fit this case. I'm doing this in C#, but this question is language-agnostic.
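
    As an illustration of how the strategy pattern could fit (a sketch only; every name here is hypothetical and none of it reflects the real Flickr or Zooomr APIs): each service becomes one strategy, and a small dispatcher picks the strategy by the URL's host, so adding a new service means adding one class rather than another branch.

        using System;
        using System.Collections.Generic;

        // One strategy per photo-sharing service.
        public interface IPhotoService
        {
            bool CanHandle(Uri url);     // typically a host-name check
            byte[] GetImage(Uri url);    // fetch the photo through that service's API
        }

        public class FlickrService : IPhotoService
        {
            public bool CanHandle(Uri url) { return url.Host.EndsWith("flickr.com"); }
            public byte[] GetImage(Uri url) { /* call the Flickr API here */ return new byte[0]; }
        }

        // Callers only ever see this entry point.
        public class PhotoFetcher
        {
            private readonly List<IPhotoService> services;

            public PhotoFetcher(IEnumerable<IPhotoService> services)
            {
                this.services = new List<IPhotoService>(services);
            }

            public byte[] GetImage(Uri url)
            {
                foreach (IPhotoService service in services)
                {
                    if (service.CanHandle(url))
                        return service.GetImage(url);
                }
                throw new NotSupportedException("No service registered for " + url.Host);
            }
        }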


  • Need help with this Regex + UrlRewriter.NET please :)

    - by Pure.Krome
    Previously, on StackOverflow ... (Summarized) I need to capture all requests for a particular subdomain and rewrite their destination. Now, the trick to determining the host via regex was solved. Next, I need to make sure all requests for the root index page are rewritten, but I can't figure out the correct regex to match the 'homepage' / website root. This is what I have:

        <if header="HTTP_HOST" match="^foo\.mydomain\.com\.au(?::\d+)?/?$">
          <!-- snip some other rewrites, e.g. /buying/product -> ~/Pages/Foo/Bar.aspx -->
          <rewrite url="^/$" to="~/Pages/SomeWeirdFolder/Home.aspx" processing="stop"/>
        </if>

    Now, if one of the rewrites is not found, it falls through and continues. So, can anyone please help?


  • Build a JavaScript wrapper for a rails-generated XML API?

    - by Thor Thurn
    I am working with a large website written in Ruby on Rails. Thanks to the support for REST in Rails 2, the site's business logic is all accessible via a consistent XML API. Now I want to be able to easily write one or more JavaScript frontends to the site that interact with the generated Rails XML API. Ideally, an automated wrapper for the API could be created in JavaScript, since this would minimize the effort required in writing XML processing code for the more than 500 API functions. How, then, can I automatically generate a wrapper around a given XML API in JavaScript so that it's more pleasant to work with? I've worked with solutions of this nature for Java that generate classes and methods to wrap an API, so my current thinking is that I want something of that nature for JavaScript. I'd be open to an alternative take on the problem, though.


  • How to open a large text file in C#

    - by desmati
    I have a text file that contains about 100000 articles. The structure of the file is:

        BEGIN OF FILE
        .Document ID 42944-YEAR:5
        .Date 03\08\11
        .Cat political
        Article Content 1
        .Document ID 42945-YEAR:5
        .Date 03\08\11
        .Cat political
        Article Content 2
        END OF FILE

    I want to open this file in C# and process it line by line. I tried this code:

        String[] FileLines = File.ReadAllText(TB_SourceFile.Text).Split(Environment.NewLine.ToCharArray());

    But it throws: Exception of type 'System.OutOfMemoryException' was thrown. The question is: how can I open this file and read it line by line?

    File size: 564 MB (591,886,626 bytes)
    File encoding: UTF-8
    The file contains Unicode characters.
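
    A minimal sketch of a streaming alternative (TB_SourceFile is the text box from the question; the ProcessSourceFile method name is hypothetical): a StreamReader returns one line at a time, so memory use stays flat regardless of the 564 MB file size.

        using System.IO;
        using System.Text;

        private void ProcessSourceFile()
        {
            using (var reader = new StreamReader(TB_SourceFile.Text, Encoding.UTF8))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    // Process one line at a time, e.g. detect ".Document ID" to start a new article.
                }
            }
        }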


  • How do I create a downscaled copy of an FBO in OpenGL?

    - by Jasper Bekkers
    Hi, In order to speed up some post-processing shaders I'm using, I need to perform these operations on a framebuffer that is smaller in size than the actual window (about 1/4th or more). Most of the effects I want to optimize are simple blurring operations that could be replaced (for a large part) by a smaller kernel and bilinear filtering. Thus, I need to create a copy of the current FBO into another one. However, I couldn't find anything that works on how to do this. I've tried using glBlitframebufferEXT and rendering a fullscreen quad into the other framebuffer, but both paths result in a black texture as output. How do I go about solving this problem?
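
    For reference, a minimal blit-based downscale in C (a sketch, assuming both FBOs are already created and framebuffer-complete with a color texture on attachment 0; all names and sizes are placeholders). A black result often just means the read/draw framebuffer bindings or buffers were not what the blit expected, which this spells out explicitly.

        #include <GL/glew.h>  /* or any loader exposing GL 3.0 / ARB_framebuffer_object */

        /* Copy the full-size FBO into the smaller FBO, with bilinear filtering. */
        void downscale_fbo(GLuint srcFbo, int srcW, int srcH,
                           GLuint dstFbo, int dstW, int dstH)
        {
            glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFbo);
            glReadBuffer(GL_COLOR_ATTACHMENT0);
            glDrawBuffer(GL_COLOR_ATTACHMENT0);

            glBlitFramebuffer(0, 0, srcW, srcH,    /* source rectangle      */
                              0, 0, dstW, dstH,    /* destination rectangle */
                              GL_COLOR_BUFFER_BIT, GL_LINEAR);

            glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* back to the default framebuffer */
        }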


  • ASP.NET MVC: post-redirect-get pattern, with only two overloaded action methods

    - by Rafi
    Is it possible to implement the post-redirect-get pattern with two overloaded action methods (one for the GET action and the other for the POST action) in ASP.NET MVC? In all of the MVC post-redirect-get samples I have seen, there are three different action methods for the post-redirect-get process, each having a different name. Is this really required? For example, does the code shown below follow the Post-Redirect-Get pattern?

        public class SalaryTransferController : Controller
        {
            //
            // GET: /SalaryTransfer/
            [HttpGet]
            public ActionResult Index(int id)
            {
                SalaryTransferIndexViewModel vm = new SalaryTransferIndexViewModel(id)
                {
                    SelectedDivision = DivisionEnum.Contracting
                };
                //Do some processing here
                return View(vm);
            }

            //
            // POST: /SalaryTransfer/
            [HttpPost]
            public ActionResult Index(SalaryTransferIndexViewModel vm)
            {
                bool validationsuccess = false;
                //validate
                if (validationsuccess)
                    return RedirectToAction("Index", new { id = 1234 });
                else
                    return View(vm);
            }
        }

    Thank you for your responses.


  • Design Pattern for Server Emulator

    - by adisembiring
    I want to build a server socket emulator, and I want to implement some design patterns in it. I will describe my case study, which I have simplified as follows: my server socket always listens for client sockets. When a request message comes in from the client socket, the server emulator responds to the client through the socket. The response is a response code: '00' means the request message was processed successfully, and any response code other than '00' means an error occurred while processing the request. In the server there is some UI; this UI contains response parameters such as:

    response code
    timeout interval

    When the server wants to respond to the client message, the response code is taken from the response input parameter in the UI. It also checks the timeout interval: it sleeps the thread, with the interval taken from the timeout interval input in the UI. I have implemented the functionality, but I created it all in one class, and I feel that is poor design. Can you suggest which classes / interfaces I should create to refactor my code?


  • How to generate makefile targets from variables?

    - by Ketil
    I currently have a makefile to process some data. The makefile gets the inputs to the data processing by sourcing a CONFIG file, which defines the input data in a variable. Currently, I symlink the input files into a local directory, i.e. the makefile contains:

        tmp/%.txt: tmp
        	ln -fs $(shell echo $(INPUTS) | tr ' ' '\n' | grep $(patsubst tmp/%,%,$@)) $@

    This is not terribly elegant, but appears to work. Is there a better way? Basically, given

        INPUTS = /foo/bar.txt /zot/snarf.txt

    I would like to be able to have e.g.

        %.out: %.txt
        	some command

    as well as targets to merge results depending on all $(INPUT) files. Also, apart from the kludgosity, the makefile doesn't work correctly with -j, something that is crucial for the analysis to complete in reasonable time. I guess that's a bug in GNU make, but any hints are welcome.
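
    One alternative that avoids the symlinks entirely (a sketch, assuming GNU make and that the basenames of the files in INPUTS are unique): derive the output names from INPUTS and let vpath locate the .txt prerequisites in their original directories. Because each .out then has its own real prerequisite, this form also behaves correctly under -j. Recipe lines must be tab-indented.

        INPUTS  = /foo/bar.txt /zot/snarf.txt
        OUTPUTS = $(addsuffix .out,$(basename $(notdir $(INPUTS))))

        # Search the inputs' own directories for the .txt prerequisites.
        vpath %.txt $(sort $(dir $(INPUTS)))

        all: merged.result

        %.out: %.txt
        	some command $< > $@

        # A merge target that depends on every per-file result.
        merged.result: $(OUTPUTS)
        	merge command $^ > $@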


  • Processes sharing cores on Ubuntu system

    - by muckabout
    My coworkers and I share an 8-core server running Ubuntu for our batch processes. I tend to run 4 processes at a time, each of which consumes 100% CPU per core when nothing else is running. When a coworker runs his processes (typically about 4 at a time), his also get 100% per core. However, when both of us run ours (he always goes first), his still get 100% and mine seem to divide the remaining processing power and linger in the 10-40% range. I even reniced his process to a lower value and it did not change. What are the issues that may cause this?


  • postgres store with composite value type, or a better way of attributing an inverted index

    - by Hassan Syed
    I can't seem to figure out the syntax for populating an hstore with a value of composite type -- note: I do not want to convert a record to an hstore.

        select hstore('hello => ROW(1,2)');

    I know it's something simple; however, Google is not my friend today.

    Use case: a custom inverted index. The data is modelling an inverted index of lexemes; the composite data types are various probabilities related to the lexemes, which I will use to implement document clustering. Does anyone know a better way of doing this? I'm open to using an external system if it allows attaching attributes to key-posting pairs in the inverted index. I'd use something external if it had solid support for what I am trying to do; I suspect that sticking 3-10k lexemes per tuple and then doing batch processing on them is going to be nasty, as the whole hstore will have to be parsed and converted.
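
    For what it's worth, a sketch of one way to do it (hstore keys and values are always text, so the composite has to be cast on the way in and back out; the lexeme_stats type below is purely illustrative):

        -- Store the composite as its text form ...
        SELECT hstore('hello', ROW(1, 2)::text);    -- yields "hello"=>"(1,2)"

        -- ... and cast it back to a composite type when reading.
        CREATE TYPE lexeme_stats AS (tf real, idf real);

        SELECT (('hello=>"(0.25,1.7)"'::hstore -> 'hello')::lexeme_stats).tf;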


  • Javascript: replacing newlines with <br/> working in FF and SAFARI and not working in IE

    - by Daniel
    I was thinking that replacing \n with <br/> with JavaScript was quite a simple task, but it seems not to be so. Posts on Ask Ben or StackOverflow suggest that something as simple as:

        dst = dst.replace(/\n/g, "<br/>");
        $("div.descr").html(dst);

    will get the job done. Indeed, this works in FF and Safari but not in IE. The text has been created in a textarea and then stored in a database, then retrieved without further processing. It works using FF on Windows and Safari on Mac. IE on Windows, nada. Is it a major bug in my head? Is it a jQuery issue? Any idea how to solve this, and a possible reason for it? Many thanks
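
    One detail worth checking (a guess based on the symptoms, not something stated in the question): text typed into a textarea on Windows is normally submitted with \r\n line endings, and the /\n/g pattern then leaves stray \r characters behind, which IE treats differently. A pattern that covers all three line-ending styles:

        // Match Windows (\r\n), old Mac (\r) and Unix (\n) newlines alike.
        dst = dst.replace(/\r\n|\r|\n/g, "<br/>");
        $("div.descr").html(dst);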


  • Any tips of how to handle hierarchical trees in relational model?

    - by George
    Hello all. I have a tree structure that can be n levels deep, without restriction. That means that each node can have another n nodes. What is the best way to retrieve a tree like that without issuing thousands of queries to the database? I looked at a few other models, like the flat table model, the preorder tree traversal algorithm, and so on. Do you guys have any tips or suggestions on how to implement an efficient tree model? My objective, in the end, is to have one or two queries that would spit out the whole tree for me. With enough processing I can display the tree in .NET, but that would be on the client machine, so it's not much of a big deal. Thanks for the attention.
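
    If the database supports recursive CTEs (SQL Server 2005+, PostgreSQL 8.4+, and others), a plain adjacency list can be pulled down in a single query. A sketch, assuming a hypothetical nodes(id, parent_id, name) table; SQL Server omits the RECURSIVE keyword:

        -- Walk the tree from the roots downward, keeping track of the depth.
        WITH RECURSIVE tree AS (
            SELECT id, parent_id, name, 0 AS depth
            FROM nodes
            WHERE parent_id IS NULL
            UNION ALL
            SELECT n.id, n.parent_id, n.name, t.depth + 1
            FROM nodes n
            JOIN tree t ON n.parent_id = t.id
        )
        SELECT * FROM tree;

    The client-side code can then rebuild the hierarchy in one pass by indexing the returned rows by parent_id.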

