Search Results

Search found 10417 results on 417 pages for 'large'.

  • How can I keep the logic that translates a ViewModel's values into a Where clause for a LINQ query out of my controller?

    - by Mr. Manager
    This same problem keeps cropping up. I have a ViewModel that doesn't have any persistent backing; it is just a ViewModel to generate a search input form. I want to build a large Where clause from the values the user entered. If the action accepts a SearchViewModel parameter, how do I do this without passing my ViewModel to my service layer? The service shouldn't know about ViewModels, right? And if I serialize it, it would just be a big string, and the key/values would no longer be strongly typed. Here is a snippet of SearchViewModel:

        [Display(Name="Address")]
        public string AddressKeywords { get; set; }

        /// <summary>
        /// Gets or sets the census.
        /// </summary>
        public string Census { get; set; }

        /// <summary>
        /// Gets or sets the lot block sub.
        /// </summary>
        public string LotBlockSub { get; set; }

        /// <summary>
        /// Gets or sets the owner keywords.
        /// </summary>
        [Display(Name="Owner")]
        public string OwnerKeywords { get; set; }

    In my controller action I was thinking of something like this, but I would think all this logic doesn't belong in my controller:

        ActionResult GetSearchResults(SearchViewModel model) {
            var query = service.GetAllParcels();
            if (model.Census != null) {
                query = query.Where(x => x.Census == model.Census);
            }
            if (model.OwnerKeywords != null) {
                query = query.Where(x => x.Owners == model.OwnerKeywords);
            }
            return View(query.ToList());
        }
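
    One way to keep the controller thin is to map the ViewModel onto a plain criteria object that lives in the service layer. A minimal sketch, assuming hypothetical Parcel, ParcelSearchCriteria, and ParcelService names:

        // Service-layer DTO: the controller maps the ViewModel onto it,
        // so the service never sees a ViewModel type.
        public class ParcelSearchCriteria
        {
            public string Census { get; set; }
            public string OwnerKeywords { get; set; }
        }

        public class ParcelService
        {
            // Builds the Where clauses; IQueryable defers execution until ToList().
            public IQueryable<Parcel> Search(ParcelSearchCriteria c)
            {
                var query = GetAllParcels();
                if (!string.IsNullOrEmpty(c.Census))
                    query = query.Where(x => x.Census == c.Census);
                if (!string.IsNullOrEmpty(c.OwnerKeywords))
                    query = query.Where(x => x.Owners == c.OwnerKeywords);
                return query;
            }
        }

        // The controller only translates and delegates:
        public ActionResult GetSearchResults(SearchViewModel model)
        {
            var criteria = new ParcelSearchCriteria
            {
                Census = model.Census,
                OwnerKeywords = model.OwnerKeywords
            };
            return View(service.Search(criteria).ToList());
        }

    The mapping is mechanical, so a tool like AutoMapper could replace the hand-written property copies as the criteria grow.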

  • Provider not notified from cookbook_file

    - by wittyhandle
    I'm working on an SSL provider using Vagrant (1.0.5) and chef-solo (10.12.0). I have my provider, called ssl, within a cookbook called gtm_cq. I use it as follows in my cookbook's default recipe:

        gtm_cq_ssl "author" do
          # attributes will come later
        end

    I then have the cookbook_file below, which should notify my ssl provider's import action once it pushes the cert up to the server:

        cookbook_file "#{node[:cq][:ssl][:author_cert_location]}/foo.cer" do
          source "foo.cer"
          owner "crx"
          group "root"
          mode "0644"
          notifies :import, resources(:gtm_cq_ssl => "author")
        end

    When I run this, foo.cer gets pushed up as expected, but the import action of my ssl provider is never called. The most I see of any reference is this couple of lines in the log (log headers removed):

        .. cookbook_file[/opt/cq5/author/foo.cer] sending import action to gtm_cq_ssl[author] (delayed)
        .. Processing gtm_cq_ssl[author] action import (gtm_cq::author line 34)

    The import action contains a large, very obvious log statement, and it also uses another cookbook_file to push a test file up to the server. Neither the log statement nor the test file ever appears. I'm certain, too, that foo.cer is removed from the server before each test. I found that if I edit my notifies line to use :immediately, like so, it seems to work:

        notifies :import, resources(:gtm_cq_ssl => "author"), :immediately

    And I suppose this is OK in my particular case, but it would seem something is not right if that's the only way I can call my provider. Any help on this would be greatly appreciated. Thanks!

  • cURL request incomplete, suspect timeout but not sure

    - by girlygeek
    I am currently using cURL via a PHP script, run as a daily cron job, to export product data in CSV format from a site's admin area. The normal way to export the data is to go to the Export page in a browser, set the configuration, and click the "export data" button. But because the number of products I am exporting is very large and the export takes more than 5-10 minutes, I decided to use PHP's curl functions to mimic this daily via cron. Previously this worked fine, but recently, after I increased the number of products in the store by 500+, the script fails to return the exported data. Testing manually, by clicking the "export" button in a browser, does return the data correctly, so there is no timeout issue when the export runs in a browser. I've also tested that by removing or decreasing the number of products (and thus the time needed), the PHP curl script works fine again when run from cron. So I suspect it has something to do with timeouts, specifically with the curl functions in PHP. I've set both CURLOPT_TIMEOUT and CURLOPT_CONNECTTIMEOUT to 0, and in the PHP script I've also called set_time_limit(3000). But it still does not work: the request times out, and the script fails to return a complete set of CSV data. Any help in resolving or understanding this issue will be much appreciated!
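
    For reference, a minimal sketch of a cron-side download that streams the response straight to disk rather than buffering it in memory, with generous explicit timeouts (the URL and paths here are placeholders, not the asker's):

        <?php
        set_time_limit(0);  // no PHP-side limit for a cron run

        $fp = fopen('/tmp/export.csv', 'w');
        $ch = curl_init('https://example.com/admin/export?format=csv');
        curl_setopt($ch, CURLOPT_FILE, $fp);           // write the body directly to the file
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 60);  // generous but finite
        curl_setopt($ch, CURLOPT_TIMEOUT, 3600);       // fail loudly after an hour
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

        if (curl_exec($ch) === false) {
            error_log('curl error: ' . curl_error($ch));  // surface the real failure reason
        }
        curl_close($ch);
        fclose($fp);

    Logging curl_error() is the important part: it distinguishes a client-side timeout from the server closing the connection, which no amount of client-side CURLOPT tweaking can fix.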

  • Why does mmap() fail with ENOMEM on a 1TB sparse file?

    - by metadaddy
    I've been working with large sparse files on openSUSE 11.2 x86_64. When I try to mmap() a 1TB sparse file, it fails with ENOMEM. I would have thought that the 64-bit address space would be adequate to map in a terabyte, but it seems not. Experimenting further, a 1GB file works fine, but a 2GB file (and anything bigger) fails. I'm guessing there might be a setting somewhere to tweak, but an extensive search turns up nothing. Here's some sample code that shows the problem - any clues?

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(int argc, char *argv[]) {
            char * filename = argv[1];
            int fd;
            off_t size = 1UL << 40; // 30 == 1GB, 40 == 1TB

            fd = open(filename, O_RDWR | O_CREAT | O_TRUNC, 0666);
            ftruncate(fd, size);
            printf("Created %ld byte sparse file\n", size);

            char * buffer = (char *)mmap(NULL, (size_t)size,
                                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if ( buffer == MAP_FAILED ) {
                perror("mmap");
                exit(1);
            }
            printf("Done mmap - returned 0x0%lx\n", (unsigned long)buffer);

            strcpy( buffer, "cafebabe" );
            printf("Wrote to start\n");

            strcpy( buffer + (size - 9), "deadbeef" );
            printf("Wrote to end\n");

            if ( munmap(buffer, (size_t)size) < 0 ) {
                perror("munmap");
                exit(1);
            }
            close(fd);
            return 0;
        }
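
    Two limits are worth ruling out before digging deeper, since mmap() returns ENOMEM when either is exceeded, not only when memory is genuinely short. A quick shell check (these are assumptions about the cause, not a confirmed diagnosis):

        # Per-process virtual address space cap; "unlimited" is needed for a 1TB map.
        ulimit -v

        # Kernel overcommit policy: 2 (strict accounting) can refuse huge mappings
        # even on 64-bit; 0 is the heuristic default.
        cat /proc/sys/vm/overcommit_memory

        # If the policy is strict, relax it temporarily (as root) and re-run:
        sysctl -w vm.overcommit_memory=1

    The 1GB-works/2GB-fails boundary in particular smells like a fixed limit somewhere, which is consistent with a ulimit expressed in KB.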

  • multiple models in Rails with a shared interface

    - by dfondente
    I'm not sure of the best structure for a particular situation in Rails. We have several types of workshops. The administration of the workshops is the same regardless of workshop type, so the data for the workshops is in a single model. We collect feedback from participants about the workshops, and the questionnaire is different for each type of workshop. I want to access the feedback about the workshop from the workshop model, but the class of the associated model will depend on the type of workshop. If I were doing this in something other than Rails, I would set up an abstract class for WorkshopFeedback and then have subclasses for each type of workshop: WorkshopFeedbackOne, WorkshopFeedbackTwo, WorkshopFeedbackThree. I'm unsure how best to handle this with Rails. I currently have:

        class Workshop < ActiveRecord::Base
          has_many :workshop_feedbacks
        end

        class Feedback < ActiveRecord::Base
          belongs_to :workshop
          has_many :feedback_ones
          has_many :feedback_twos
          has_many :feedback_threes
        end

        class FeedbackOne < ActiveRecord::Base
          belongs_to :feedback
        end

        class FeedbackTwo < ActiveRecord::Base
          belongs_to :feedback
        end

        class FeedbackThree < ActiveRecord::Base
          belongs_to :feedback
        end

    This doesn't seem like the cleanest way to access the feedback from the workshop model, as getting the correct feedback requires logic that inspects the workshop type and then chooses, for instance, @workshop.feedback.feedback_one. Is there a better way to handle this situation? Would it be better to use a polymorphic association for feedback? Or maybe a Module or Mixin for the shared Feedback interface? Note: I am avoiding Single Table Inheritance here because the FeedbackOne, FeedbackTwo, and FeedbackThree models do not share much common data, so I would end up with a large, sparsely populated table with STI.
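
    For comparison, a minimal sketch of the polymorphic version (names are illustrative): the shared Feedback model points at whichever questionnaire record applies, so the workshop never needs type-switching logic.

        class Workshop < ActiveRecord::Base
          has_many :feedbacks
        end

        class Feedback < ActiveRecord::Base
          belongs_to :workshop
          # the feedbacks table carries detail_id and detail_type columns
          belongs_to :detail, :polymorphic => true
        end

        class FeedbackOne < ActiveRecord::Base
          has_one :feedback, :as => :detail
        end

        class FeedbackTwo < ActiveRecord::Base
          has_one :feedback, :as => :detail
        end

        # Uniform access regardless of workshop type:
        #   @workshop.feedbacks.each { |f| render_questionnaire(f.detail) }

    Each questionnaire keeps its own table (avoiding the sparse STI table), and the cost is one extra type column plus the usual caveat that detail_id cannot be a real foreign key.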

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not! I have already taken into account the following possibilities for speeding up performance:

    - Pre-normalize all the term vectors
    - Calculate a term vector for each group (n ~= 10,000) rather than for each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know!

    I have a few questions:

    - If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive.
    - Is there some clever way to only consider vectors which are likely to be close to the query document?
    - Is there some way to be more frugal about the amount of memory I'm using for all these vectors?

    Thanks!
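
    A minimal sketch of the pre-normalized comparison with dict-based sparse vectors (plain Python, illustrative names). It also shows why an unseen term is harmless with plain term weights: it simply contributes nothing to the dot product, so no existing vectors need recomputing (IDF-style weights are another matter, since document frequencies shift).

        import math

        def normalize(vec):
            """Scale a {term: weight} vector to unit length."""
            norm = math.sqrt(sum(w * w for w in vec.values()))
            return {t: w / norm for t, w in vec.items()}

        def cosine(a, b):
            """Dot product of two unit vectors == cosine similarity.
            Iterating the smaller dict keeps sparse comparisons cheap."""
            if len(b) < len(a):
                a, b = b, a
            return sum(w * b.get(t, 0.0) for t, w in a.items())

        # One pre-normalized vector per group (~10k) instead of per e-mail (~1M):
        # best = max(groups, key=lambda g: cosine(query, group_vectors[g]))

    Terms absent from one vector never appear in the shared sum, which also answers the memory question in spirit: store only the nonzero entries.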

  • How can I initialize an ActiveX control from a URL?

    - by Peter Ruderman
    I have an MFC ActiveX control embedded in a web page. Some of the parameters for this control are very large. I don't know what these values will be at compile time, but I do know that once retrieved, they will almost certainly never change. Currently, I embed the parameters like so:

        <object name="MyActiveX">
            <param name="param" value="<%= GetData() %>" />
        </object>

    I want to do something like this:

        <object name="MyActiveX">
            <param name="param" value="content/data" valuetype="ref" />
        </object>

    The idea is that the browser would retrieve the resource from the web server and pass it on to the control; the browser's own caching would then take care of the unnecessary downloads. Unfortunately, ref parameters don't work like this. The browser just passes the URL along to the control (which strikes me as utterly useless, but I digress). So, is there some way I can make this work? Alternatively, is there an easy way in MFC to instruct the control's host container to retrieve a URI-identified resource? Any better ideas?
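
    One avenue (an assumption, not a confirmed fix): since the browser hands the control the URL anyway, the control can fetch it itself through URLMON, which routes the download through WinInet's cache - so repeat page loads are served locally. A minimal sketch:

        #include <urlmon.h>
        #pragma comment(lib, "urlmon.lib")

        HRESULT LoadParamFromUrl(LPCWSTR url)
        {
            WCHAR cachePath[MAX_PATH];
            // Downloads the URL, or reuses the cached copy, and returns a local file path.
            HRESULT hr = URLDownloadToCacheFileW(NULL, url, cachePath,
                                                 MAX_PATH, 0, NULL);
            if (SUCCEEDED(hr))
            {
                // ... open cachePath and parse the parameter data ...
            }
            return hr;
        }

    Passing the control's own IUnknown as the first argument instead of NULL lets the host's moniker context participate in the bind, which is usually what an embedded control wants.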

  • Populate a TreeView on demand with data from an XML file

    - by m6a-uds
    Hi, I have a large XML file (3000+ nodes) that I want to represent in a TreeView in ASP.NET. I cannot databind it to an XmlDataSource, because loading the TreeView is then far too slow (I never even waited long enough to see it finish...). So the solution would be to use the PopulateOnDemand property of the TreeNodes to load data only when needed. Problem is, I can't think of a way to achieve this... How can I, based on the ID of a node, search an XmlDocument to get all the child nodes of the node having this ID? The XML looks something like this:

        <document ID="1">
            <document ID="2">
                <document ID="3">
                </document>
            </document>
            <document ID="4">
            </document>
        </document>

    There are no rules on how many levels deep it can go, or anything like that.
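
    A minimal sketch of the TreeNodePopulate handler using an XPath ID lookup (the handler wiring and the GetCachedDocument accessor are assumptions, not from the original):

        protected void Tree_TreeNodePopulate(object sender, TreeNodeEventArgs e)
        {
            // Cache the parsed document (Application state, a static field, ...)
            // so each expansion costs one XPath query, not one parse.
            XmlDocument doc = GetCachedDocument();
            XmlNode parent = doc.SelectSingleNode(
                string.Format("//document[@ID='{0}']", e.Node.Value));
            if (parent == null) return;

            foreach (XmlNode child in parent.SelectNodes("document"))
            {
                string id = child.Attributes["ID"].Value;
                TreeNode node = new TreeNode(id, id);
                node.PopulateOnDemand = true;   // grandchildren stay deferred
                e.Node.ChildNodes.Add(node);
            }
        }

    Storing the ID in the node's Value property is what makes the XPath round trip possible on the next expansion.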

  • MySQL PHP | "SELECT FROM table" using "alphanumeric"-UUID. Speed vs. Indexed Integer / Indexed Char

    - by dropson
    At the moment, I select rows from 'table01' using:

        SELECT * FROM table01 WHERE UUID = 'whatever';

    The UUID column is a unique index. I know this isn't the fastest way to select data from the database, but the UUID is the only row identifier available to the front-end. Since I have to select by UUID, and not ID, I need to know which of these two options I should go for if, say, the table consists of 100,000 rows. What speed differences would I be looking at, and would the index for the UUID grow too large and lag the DB?

    Option 1: get the ID before doing the "big" select.

        1. $id = "SELECT ID FROM table01 WHERE UUID = '{alphanumeric character}'";
        2. $rows = SELECT * FROM table01 WHERE ID = $id;

    Option 2: keep it the way it is now, using the UUID.

        1. SELECT * FROM table01 WHERE UUID = '{alphanumeric character}';

    Side note: all new rows are created by checking whether the system-generated unique ID exists before trying to insert a new row, keeping the column always unique. The "example" table:

        CREATE TABLE Table01 (
            ID int NOT NULL PRIMARY KEY,
            UUID char(15),
            name varchar(100),
            url varchar(255),
            `date` datetime
        ) ENGINE = InnoDB;

        CREATE UNIQUE INDEX UUID ON Table01 (UUID);
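
    A quick sanity check, assuming the schema above: EXPLAIN shows that the unique index already makes the UUID lookup a single B-tree probe, so the two-step ID fetch only adds a round trip.

        -- Output shape is illustrative, but type=const / rows=1 is the point:
        EXPLAIN SELECT * FROM Table01 WHERE UUID = 'a1b2c3d4e5f6g7h';
        -- key: UUID, type: const, rows: 1

    At 100,000 rows a char(15) secondary index is on the order of a few megabytes - not the kind of size that lags InnoDB.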

  • Intelligent search and generation of Java code, preferably using Python?

    - by Ipsquiggle
    Basically, I do lots of one-off code generation, large-scale refactorings, etc., in Java. My tool language of choice is Python, but I'll take whatever solutions you can offer. Here is a simplified illustration of what I would like, in pseudocode - generating an implementation for an interface:

        search within my project:
            for each Interface as iName:
                write class(name=iName+"Impl", implements=iName)
                search within the body of iName:
                    for each Method as mName:
                        write method(name=mName, body="// TODO implement this...")

    Basically, the tool I'm searching for would allow me to:

    - parse files according to their Java structure ("search for interfaces")
    - search for words contextualized by language elements and types ("variables of type SomeClass", "doStuff() method calls on SomeClass instances")
    - run searches with structural context ("within the body of the current result")
    - easily replace or generate code (with helpers to generate, as above, or functions for replacing: "rename the interface to Foo", "insert the line Blah.Blah()", etc.)

    The point is, I don't want to spend a lot of time writing these things, as they are usually throwaway. But sometimes I need something just a little smarter than what grep offers. It wouldn't be too hard to write a simplistic version of this, but if I'm going to use something like this at all, I'd expect it to be robust. Any suggestions of a tool/library that will help me accomplish this?
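
    To make the pseudocode concrete, here is a deliberately simplistic Python sketch of the interface-stub example using only the standard library. A regex is not a Java parser - a robust tool would sit on a real grammar - but this is the shape of the throwaway scripts in question:

        import re
        from pathlib import Path

        IFACE = re.compile(r'\binterface\s+(\w+)')
        METHOD = re.compile(
            r'^\s*(?:public\s+)?([\w<>\[\]]+)\s+(\w+)\s*\(([^)]*)\)\s*;',
            re.MULTILINE)

        Path('generated').mkdir(exist_ok=True)
        for path in Path('src').rglob('*.java'):
            text = path.read_text()
            m = IFACE.search(text)
            if not m:
                continue
            name = m.group(1)
            methods = [
                '    public %s %s(%s) {\n'
                '        // TODO implement this...\n'
                '        throw new UnsupportedOperationException();\n'
                '    }' % (ret, meth, args)
                for ret, meth, args in METHOD.findall(text)
            ]
            stub = 'public class %sImpl implements %s {\n%s\n}\n' % (
                name, name, '\n\n'.join(methods))
            Path('generated/%sImpl.java' % name).write_text(stub)

    The throw-instead-of-return body keeps the generated stubs compilable for any return type.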

  • What's the "correct way" to organize this project?

    - by user571747
    I'm working on a project that allows multiple users to submit large data files and perform operations on them. The "backend" which performs these operations is written in Perl, while the "frontend" uses PHP to load HTML template files and determine which content to deliver. Data is stored in a database (MySQL, SQLite, Oracle), and while there is data which has not yet been acted upon, Perl adds it to a running queue which delivers data to other threads based on system load. In addition, there may be pre- and post-processing of the data before and after the main Perl script operates (the specifications are unclear), so I may want to allow these processors to be user-selectable plugins. I had been writing this project in a more procedural fashion, but I am quickly realizing the benefit of separating concerns so as to limit the scope one change has on the rest of the project. I'm quite inexperienced with design patterns and am curious what the best way to proceed is. I've heard MVC thrown around quite a bit, but I am unsure of how to apply it. Specifically, what are some good options for structuring this code (in terms of design patterns and folder hierarchy)? How can I achieve this with both PHP and Perl while minimizing duplicated code between languages? Should I keep my PHP files in the top level so I don't have ugly paths in the URL? Also, if I want to provide interchangeable databases, does each table need its own DAO implementation?
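
    One possible layout, sketched as a directory tree (an assumption, not the only sensible arrangement): the web root stays clean without putting PHP at the top level, because only the front controller is public.

        project/
            public/            # web root: index.php front controller, css/, js/
            app/
                controllers/   # PHP: picks templates and content to deliver
                views/         # HTML templates
                models/        # PHP data access, one DAO interface per table
            backend/
                bin/           # Perl entry points: queue runner, workers
                lib/           # shared Perl modules
                plugins/
                    pre/       # user-selectable pre-processors
                    post/      # user-selectable post-processors
            sql/               # schema kept portable across MySQL/SQLite/Oracle

    URL rewriting to public/index.php gives clean paths, which answers the "PHP at top level" worry; and a thin DAO interface per table (or a shared abstract DAO over PDO/DBI) is what keeps the databases interchangeable in both languages.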

  • SharePoint Lists.asmx's UpdateListItems() returns too much data

    - by Philipp Schmid
    Is there a way to prevent the UpdateListItems() web service call in SharePoint's Lists.asmx endpoint from returning all of the fields of the newly created or updated list item? In our case, an event handler attached to our custom list adds some rather large field values, which are returned to the client unnecessarily. Is there a way to tell it to return only the ID of the newly created (or updated) list item? For example, currently the web service returns something like this:

        <Results xmlns="http://schemas.microsoft.com/sharepoint/soap/">
            <Result ID="1,Update">
                <ErrorCode>0x00000000</ErrorCode>
                <z:row ows_ID="4" ows_Title="Title" ows_Modified="2003-06-19 20:31:21"
                       ows_Created="2003-06-18 10:15:58" ows_Author="3;#User1_Display_Name"
                       ows_Editor="7;#User2_Display_Name" ows_owshiddenversion="3"
                       ows_Attachments="-1" ows__ModerationStatus="0"
                       ows_LinkTitleNoMenu="Title" ows_LinkTitle="Title" ows_SelectTitle="4"
                       ows_Order="400.000000000000" ows_GUID="{4962F024-BBA5-4A0B-9EC1-641B731ABFED}"
                       ows_DateColumn="2003-09-04 00:00:00" ows_NumberColumn="791.00000000000000"
                       xmlns:z="#RowsetSchema" />
            </Result>
            ...
        </Results>

    whereas I am looking for a trimmed response containing only, for example, the ows_ID attribute:

        <Results xmlns="http://schemas.microsoft.com/sharepoint/soap/">
            <Result ID="1,Update">
                <ErrorCode>0x00000000</ErrorCode>
                <z:row ows_ID="4" />
            </Result>
            ...
        </Results>

    I have unsuccessfully looked for a resource that documents all of the valid attributes for both the <Batch> and <Method> tags in the updates XmlNode parameter of UpdateListItems(), in the hope of finding a way to specify the fields to return. A solution for WSS 3.0 would be preferable to an SP 2010-only solution.

  • Replacing certain words with links to definitions using Javascript

    - by adharris
    I am trying to create a glossary system which gets a list of common words and their definitions via ajax, then replaces any occurrence of those words in certain elements (those with the useGlossary class) with a link to the full definition, providing a short definition on mouse hover. The way I am doing it works, but for large pages it takes 30-40 seconds, during which the page hangs. I would like to either decrease the time the replacement takes or make the replacement run in the background without hanging the page. I am using jQuery for most of the JavaScript and qTip for the mouse hover. Here is my existing slow code:

        $(document).ready(function () {
            $.get("fetchGlossary.cfm", null, glossCallback, "json");
        });

        function glossCallback(data) {
            $(".useGlossary").each(function() {
                var $this = $(this);
                for (var i in data) {
                    $this.html($this.html().replace(
                        new RegExp("\\b" + data[i].term + "\\b", "gi"),
                        function(m) { return makeLink(m, data[i].def); }));
                }
                $this.find("a.glossary").qtip({ style: { name: 'blue', tip: true } });
            });
        }

        function makeLink(m, def) {
            return "<a class='glossary glossary" + m.replace(/\s/gi, "").toUpperCase()
                + "' href='reference/glossary.cfm' title='" + def + "'>" + m + "</a>";
        }

    Thanks for any feedback/suggestions!
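
    The main cost above is reading and re-serializing each element's HTML once per term (data.length times per element). A sketch of a single-pass variant, assuming the same {term, def} response shape: build one alternation regex, then rewrite each element's HTML exactly once.

        function glossCallback(data) {
            var defs = {};
            var terms = [];
            for (var i = 0; i < data.length; i++) {
                defs[data[i].term.toLowerCase()] = data[i].def;
                // escape regex metacharacters in terms before joining them
                terms.push(data[i].term.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'));
            }
            var pattern = new RegExp('\\b(' + terms.join('|') + ')\\b', 'gi');

            $('.useGlossary').each(function () {
                var $this = $(this);
                $this.html($this.html().replace(pattern, function (m) {
                    return makeLink(m, defs[m.toLowerCase()]);
                }));
                $this.find('a.glossary').qtip({ style: { name: 'blue', tip: true } });
            });
        }

    If that is still too slow, the .each() loop can be spread across setTimeout() slices (a few elements per tick) so the browser repaints between batches instead of hanging.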

  • Periodically iterating over a collection that's constantly changing

    - by rwmnau
    I have a collection of objects that's constantly changing, and I want to display some information about what's currently in it. My application is multi-threaded, and different threads are constantly submitting requests to modify objects in the collection, so it's unpredictable. If I lock the collection, I can iterate over it and get my information without any problems - however, this causes problems for the other threads, which could have submitted multiple requests to modify the collection in the meantime and will be stalled. I've thought of a couple of ways around this, and I'm looking for any advice:

    1. Make a copy of the collection and iterate over it, allowing the original to continue updating in the background. The collection can get large, so this isn't ideal, but it's safe.
    2. Iterate over it using a For...Next loop, and catch an IndexOutOfBounds exception if an item is removed from the collection while we're iterating. This may occasionally cause duplicates to appear in my snapshot, so it's not ideal either.

    Any other ideas? I'm only concerned about a moment-in-time snapshot, so I'm not worried about reflecting changes in my application - my main concern is that the collection be able to be updated with minimal latency, and that updates never be lost.
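
    A sketch of option 1 with the copy kept cheap (C#; the names are illustrative): hold the lock only long enough for a shallow copy, then iterate with no lock held. Writers stall for one quick copy, not for the whole report.

        List<Item> snapshot;
        lock (collectionLock)
        {
            // Shallow copy: duplicates the references, not the objects,
            // so even a large collection copies quickly.
            snapshot = new List<Item>(items);
        }
        foreach (var item in snapshot)
        {
            Report(item);   // no lock held; writers proceed concurrently
        }

    The snapshot is a true moment-in-time view of membership; the only caveat is that the objects themselves may keep mutating, so Report() should read each item's fields once rather than assuming they hold still.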

  • Better way to download a binary file?

    - by geoff
    I have a site where a user can download a file. Some files are extremely large (the largest being 323 MB). When I test downloading this file, I get an out-of-memory exception. The only way I know to download the file is below. The reason I'm using this code is that the URL is encoded and I can't let the user link directly to the file. Is there another way to download this file without having to read the whole thing into a byte array?

        FileStream fs = new FileStream(context.Server.MapPath(url), FileMode.Open, FileAccess.Read);
        BinaryReader br = new BinaryReader(fs);
        long numBytes = new FileInfo(context.Server.MapPath(url)).Length;
        byte[] bytes = br.ReadBytes((int) numBytes);
        string filename = Path.GetFileName(url);

        context.Response.Buffer = true;
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.BinaryWrite(bytes);
        context.Response.Flush();
        context.Response.End();
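
    A sketch of the streaming alternative: the file never materializes in managed memory. HttpResponse.TransmitFile hands the send to IIS, which streams from disk, and turning buffering off keeps ASP.NET from accumulating the body.

        string path = context.Server.MapPath(url);
        string filename = Path.GetFileName(url);

        context.Response.Buffer = false;   // stream instead of accumulating
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.AddHeader("content-length", new FileInfo(path).Length.ToString());
        context.Response.TransmitFile(path);
        context.Response.End();

    The URL decoding still happens server-side before MapPath, so the user never sees a direct link. On frameworks without TransmitFile, the same effect comes from reading the FileStream in fixed-size chunks (say 64 KB) and writing each chunk to Response.OutputStream with a Flush.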

  • How can I spot subtle Lisp syntax mistakes?

    - by Marius Andersen
    I'm a newbie playing around with Lisp (actually, Emacs Lisp). It's a lot of fun, except that I seem to run into the same syntax mistakes again and again. For instance, here's something I've encountered several times. I have some cond form, like

        (cond ((foo bar) (qux quux))
              ((or corge (grault warg))
               (fred)
               (t xyzzy)))

    and the default clause, which returns xyzzy, is never carried out, because it's actually nested inside the previous clause. It should be:

        (cond ((foo bar) (qux quux))
              ((or corge (grault warg))
               (fred))
              (t xyzzy))

    It's difficult for me to see such errors when the difference in indentation is only one space. Does this get easier with time? I also have problems when there's a large distance between the (mal-)indented line and the line it should be indented against: let forms with a lot of complex bindings, for example, or an unless form with a long conditional:

        (defun test ()
          (unless (foo bar
                       (qux quux)
                       (or corge
                           (grault warg)
                           (fred))))
          xyzzy)

    It turns out xyzzy was never inside the unless form at all:

        (defun test ()
          (unless (foo bar
                       (qux quux)
                       (or corge
                           (grault warg)
                           (fred)))
            xyzzy))

    I auto-indent habitually and use parenthesis highlighting to avoid counting parentheses. For the most part it works like a breeze, but occasionally I discover my syntax mistakes only by debugging. What can I do?

  • Which web solution should I use for my project?

    - by BenIOs
    I'm going to create a fairly large (from my point of view, anyway) web project with a friend: a site with roads and other road-related info. Our calculations suggest we will have around 100k items in our database. Each item will contain some information like location, name, etc. (about 30 fields each). We are counting on having a few hundred thousand unique visitors per month. The 100k items and their locations (which will be searchable) will be the main part of the site, but we will also have some articles, comments, news, and later on some more social functions (accounts, forums, picture uploads, etc.). We were going to use Google App Engine to develop our project, since it is really scalable and free (at least for a while). But I'm actually starting to doubt that App Engine is right for us; it seems to be for web apps rather than sites like ours. Which system (language/framework, etc.) would you recommend we use? It doesn't really matter whether we already know the language (we like learning new stuff), but it would be good if it's something future-proof.

  • Best way of showing more results with javascript/css

    - by Ricardo Neves
    I'm developing a website and I'm having trouble showing search results to the user the way I want. Basically, after the user searches, the page makes a couple of ajax requests, and as soon as a response arrives it appends the info to a specific element on the page. Each result is shown as a line... The problem is that in most cases there are going to be more than 1000 results, and this would give the page a very long scroll. My idea was to show only the first 15 results and, when the user clicks "show more", expand the element to show the next 15 results, and so on... This would be easier if the website weren't responsive, but because it is, I can't find a proper way of implementing what I want without lowering the site's performance. I have two ideas:

    1. Use something like #element .div:nth-child(-n+15) in my CSS and find a way of changing the "15" to however many results I want to show... I don't know if this can be done. Is it possible to call CSS rules with parameters? Maybe with LESS?
    2. Using JavaScript, check whether a specific CSS class exists (like .show-15, .show-30, .show-45), add that class to my element, and if it doesn't exist, create it somehow. This is probably a bad option if I don't want to lower the website's performance.

    Any help would be appreciated.
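
    A sketch of a third option in plain jQuery (selectors are illustrative): keep a running count and reveal slices of the already-appended rows, so no CSS classes need generating at all.

        var visible = 15;
        var STEP = 15;

        function refresh() {
            var $rows = $('#results > .result-row');
            $rows.hide().slice(0, visible).show();
            // hide the button once everything is visible
            $('#show-more').toggle(visible < $rows.length);
        }

        $('#show-more').on('click', function () {
            visible += STEP;
            refresh();
        });

        // call refresh() again after each ajax append so new rows obey the cap

    Hiding rows also helps the responsive layout: the browser skips layout work for display:none elements, which matters more than the CSS mechanism chosen when there are 1000+ rows.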

  • Building a J2EE dev/test setup on a single PC

    - by John
    It's been a while since I did Java work, and even then I was never responsible for starting a large project from the very start... there were test/staging/production systems already running, etc. Now I am looking to start a J2EE project from scratch on my trusty workstation, which has never been used for Java development and runs Windows 7 64-bit. First of all, I'll be getting Eclipse. As far as writing the code goes, I'm pretty happy, and running it through Eclipse is OK. But what I'd really like is to have a VM running MySQL and Tomcat, on which I can properly deploy my project and run/debug it 'remotely' from my dev PC. And I guess this should be done using Ant instead of letting Eclipse build the WAR for me, so that I don't end up with a dependence on Eclipse. I'm certain Eclipse can do this - you hit a button and it runs Ant scripts, deploys, and debugs, for instance - but I'm very hazy on the details. Are there any good guides on this? I don't want to be taught Java, or even Ant, but rather the 'glue' parts: getting my test VM up and running under Windows, getting a build/test/deploy/run pipeline working through Eclipse, etc. One point: I only plan to use Windows, hosting a Windows VM on my Windows desktop. And while I can use command-line tools like ant/svn, I'm much more a GUI person who loves IDE integration... I'd rather this didn't end up an argument about Linux or Vi, etc.! I am looking for free, but I am a MAPS subscriber and run Win7 Ultimate, in case that makes a difference for free VM solutions.
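
    For the Ant side of the glue, a minimal sketch of an Eclipse-independent build that compiles, packages the WAR, and drops it into the test VM's Tomcat (the share path is an assumption - an scp task or the Tomcat manager would work equally well):

        <project name="myapp" default="deploy">
            <property name="tomcat.webapps" value="\\testvm\tomcat\webapps"/>

            <target name="compile">
                <mkdir dir="build/classes"/>
                <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
            </target>

            <target name="war" depends="compile">
                <war destfile="build/myapp.war" webxml="web/WEB-INF/web.xml">
                    <classes dir="build/classes"/>
                    <fileset dir="web" excludes="WEB-INF/web.xml"/>
                </war>
            </target>

            <target name="deploy" depends="war">
                <copy file="build/myapp.war" todir="${tomcat.webapps}"/>
            </target>
        </project>

    Eclipse runs this via Run As > Ant Build, and remote debugging is then just Tomcat started in JPDA mode plus a "Remote Java Application" debug configuration pointed at the VM's debug port (8000 by default).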

  • Parallel doseq for Clojure

    - by andrew cooke
    I haven't used multithreading in Clojure at all, so I'm unsure where to start. I have a doseq whose body can run in parallel. What I'd like is for there always to be 3 threads running (leaving 1 core free) that evaluate the body in parallel until the range is exhausted. There's no shared state, nothing complicated - the equivalent of Python's multiprocessing would be just fine. So something like:

        (dopar 3 [i (range 100)]
          ; repeated 100 times in 3 parallel threads...
          ...)

    Where should I start looking? Is there a command for this? A standard package? A good reference? So far I have found pmap, and could use that (how do I restrict it to 3 at a time? It looks like it uses 32 at a time - no, the source says 2 + the number of processors), but it seems like this is a basic primitive that should already exist somewhere.

    Clarification: I really would like to control the number of threads. I have processes that are long-running and use a fair amount of memory, so creating a large number and hoping things work out OK isn't a good approach (for example, one run uses a significant chunk of available memory).

    Update: I'm starting to write a macro that does this, and I need a semaphore (or a mutex, or an atom I can wait on). Do semaphores exist in Clojure? Or should I use a ThreadPoolExecutor? It seems odd to have to pull so much in from Java - I thought parallel programming in Clojure was supposed to be easy... Maybe I am thinking about this completely the wrong way? Hmmm. Agents?
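
    A minimal sketch of the idea built directly on a fixed-size pool from java.util.concurrent (no extra dependencies; do-work is a placeholder): exactly n threads, and the call blocks until the whole sequence is consumed.

        (import '(java.util.concurrent Executors TimeUnit))

        (defn dopar-fn [n f coll]
          (let [pool (Executors/newFixedThreadPool n)]
            (doseq [x coll]
              (.execute pool (fn [] (f x))))   ; Clojure fns are Runnables
            (.shutdown pool)
            (.awaitTermination pool Long/MAX_VALUE TimeUnit/SECONDS)))

        ;; three long-running workers at a time, memory bounded by pool size:
        (dopar-fn 3 (fn [i] (do-work i)) (range 100))

    And yes, semaphores are available the same way - java.util.concurrent.Semaphore works fine from Clojure - but the fixed pool already gives the bounded-threads behaviour without hand-rolled acquire/release bookkeeping.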

  • Log activity, intrusion detection, user event notification (interaction), messaging

    - by Julian Davchev
    I have three questions that I find somehow related, so I put them in the same place. I'm currently building a relatively large LAMP system, making use of messaging (ActiveMQ), memcache, and other goodies. I wonder if there are best practices or nice tips and tricks on how to implement these. The system is user-aware, meaning all actions performed can be bound to a particular logged-in user.

    1. How do I log all actions/activities of users, so that stats/graphics can be extracted later for analysis? At best this would include all URL calls, POST data, etc., meaning tons of inserts. I am thinking that sending messages to ActiveMQ, having a cron job dump them into the DB, and having another cron job analyse them might be a good idea here. Since I'm using Zend Framework, I guess I could use a request plugin so I don't have to make the log() call all over the code.

    2. How do I log things that can be used for intrusion detection? I know most of this can be done at the HTTP level using Apache mods, for example, but there are also application-specific cases (5 failed login attempts in a row leads to a captcha, etc.). This would also involve tons of inserts. Here I guess direct use of memcache might be the best approach, as the data doesn't seem vital enough to be permanently persisted. I'm not sure whether I could reuse the data from point 1.

    3. The system will notify users of certain events: something needs approval, something broke, whatever. Some events will need feedback (an action) from the user; others are just informational. I wonder if there are common solutions for needs like this. Example: based on occurring events, the user is notified (in a user inbox, for example) of what happened, with a link or something leading to the details so they can take action accordingly. These things seem trivial at first look, but coding them directly quickly becomes hard to maintain.
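
    For point 1, a sketch of the Zend Framework plugin approach (ZF1 API; the queue wrapper is a hypothetical stand-in for your Stomp/ActiveMQ client): every dispatched request gets logged in one place, with no log() calls scattered through the actions.

        <?php
        class ActivityLogPlugin extends Zend_Controller_Plugin_Abstract
        {
            public function dispatchLoopShutdown()
            {
                $request = $this->getRequest();
                $event = array(
                    'user' => Zend_Auth::getInstance()->getIdentity(),
                    'url'  => $request->getRequestUri(),
                    'post' => $request->getPost(),
                    'time' => time(),
                );
                // Fire-and-forget to the broker; a cron consumer batches
                // the events into the DB off the request path.
                MessageQueue::send('user.activity', json_encode($event)); // hypothetical wrapper
            }
        }

        // in the bootstrap:
        // Zend_Controller_Front::getInstance()->registerPlugin(new ActivityLogPlugin());

    Running in dispatchLoopShutdown keeps the logging cost off the page-render path, and the same event stream can feed point 2's rate counters (failed logins per user in memcache) without a second capture mechanism.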

  • Hashing 11 byte unique ID to 32 bits or less

    - by MoJo
    I am looking for a way to reduce an 11-byte unique ID to 32 bits or fewer. I am using an Atmel AVR microcontroller that has the ID number burned in at the factory, but because the ID has to be transmitted very often in a very low-power system, I want to reduce its length to 4 bytes or fewer. The ID is guaranteed unique for every microcontroller. It is made up of data from the manufacturing process - essentially the coordinates of the silicon on the wafer and the production line that was used. The IDs look like this:

        304A34393334-16-11001000
        314832383431-0F-09000C00

    Obviously the main danger is that by reducing these IDs they become non-unique, and unfortunately I don't have a large enough sample size to test how unique the reduced numbers would be. Having said that, because there will only be tens of thousands of devices in use, and there is secondary information that can help identify them (such as their approximate location, known at the time of communication), collisions might not be too much of an issue if they are few and far between. Is something like MD5 suitable for this? My concern is that the data being hashed is very short, just 11 bytes. Do hash functions work reliably on such short data?
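
    Good hash functions mix 11-byte inputs as thoroughly as long ones, so truncation is the standard move; the real question is the birthday bound. A sketch of both, in Python for the arithmetic (MD5 is chosen for ubiquity, not because the application needs a cryptographic hash):

        import hashlib
        import struct

        def short_id(factory_id: bytes) -> int:
            """Map the 11-byte factory ID to 32 bits via truncated MD5."""
            digest = hashlib.md5(factory_id).digest()
            return struct.unpack('<I', digest[:4])[0]

        # Collision probability for n devices in a 32-bit space
        # (birthday approximation): p ~= 1 - exp(-n^2 / 2^33)
        #   n = 10,000  ->  p ~= 1.2%
        #   n = 50,000  ->  p ~= 25%

    With tens of thousands of devices, a few colliding pairs across the whole fleet are plausible, so the secondary location information needs to be able to break ties; each extra byte of digest kept divides the collision risk by roughly 256.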

  • Determine mouse click for all screen resolutions

    - by Hallik
    I have some simple JavaScript that determines where a click happens within the browser:

        var clickDoc = (document.documentElement != undefined && document.documentElement.clientHeight != 0)
            ? document.documentElement : document.body;
        var x = evt.clientX;
        var y = evt.clientY;
        var w = clickDoc.clientWidth != undefined ? clickDoc.clientWidth : window.innerWidth;
        var h = clickDoc.clientHeight != undefined ? clickDoc.clientHeight : window.innerHeight;
        var scrollx = window.pageXOffset == undefined ? clickDoc.scrollLeft : window.pageXOffset;
        var scrolly = window.pageYOffset == undefined ? clickDoc.scrollTop : window.pageYOffset;
        params = '&x=' + (x + scrollx) + '&y=' + (y + scrolly) + '&w=' + w + '&random=' + Date();

    All of this data gets stored in a DB. Later I retrieve it and display where all the clicks happened on that page. This works fine if I do all my clicks in one resolution and then display them in the same resolution, but that is not the case: a wide range of resolutions may be used. In my test case I was clicking on the screen at a resolution of 1260x1080, and retrieving and displaying the data at the same resolution worked. But when I use a different monitor (I tried 1024x768 and 1920x1080), the marks shift to the wrong spots. My question is: if I am storing the width and height of the client and the x/y position of the click, and three users with different screen resolutions all click on the same word, how can a fourth user who views the clicks see them plotted correctly, showing that everyone clicked in the same place regardless of resolution? If this belongs in a better section, please let me know as well.
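
    One normalization sketch, assuming the page reflows with viewport width: store each click as a fraction of the document's full scrollable size, then scale back to pixels on the viewer's own page.

        function recordClick(evt) {
            var doc = document.documentElement;
            var x = evt.clientX + (window.pageXOffset || doc.scrollLeft);
            var y = evt.clientY + (window.pageYOffset || doc.scrollTop);
            return {
                fx: x / doc.scrollWidth,    // 0..1, resolution-independent
                fy: y / doc.scrollHeight
            };
        }

        function plotClick(stored) {
            var doc = document.documentElement;
            return {
                x: stored.fx * doc.scrollWidth,   // pixels on *this* viewer's page
                y: stored.fy * doc.scrollHeight
            };
        }

    Fractions survive uniform scaling, but text that rewraps at different widths will still drift; pinning clicks to "the same word" exactly requires recording the nearest element (plus an offset within it) rather than bare page coordinates.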

  • Python lists/arrays: disable negative indexing wrap-around

    - by wim
    While I find the negative-number wraparound (i.e., A[-2] indexing the second-to-last element) extremely useful in many cases, there are often use cases where it is more of an annoyance than a help, and I find myself wishing for an alternate syntax to use when I would rather disable that particular behaviour. Here is a canned 2D example below, but I have had the same peeve a few times with other data structures and in other numbers of dimensions:

        import numpy as np

        A = np.random.randint(0, 2, (5, 10))

        def foo(i, j, r=2):
            '''sum of neighbours within r steps of A[i,j]'''
            return A[i-r:i+r+1, j-r:j+r+1].sum()

    In the slice above, I would rather that any negative number in the slice were treated the same as None is, rather than wrapping to the other end of the array. Because of the wrapping, the otherwise nice implementation above gives incorrect results at boundary conditions and requires a patch like:

        def ugly_foo(i, j, r=2):
            def thing(n):
                return None if n < 0 else n
            return A[thing(i-r):i+r+1, thing(j-r):j+r+1].sum()

    I have also tried zero-padding the array or list, but it is still inelegant (it requires adjusting the lookup indices accordingly) and inefficient (it requires copying the array). Am I missing some standard trick or elegant solution for slicing like this? I noticed that Python and numpy already handle the case where you specify too large a number nicely - that is, if the index is greater than the shape of the array, it behaves the same as if it were None.
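
    A small clamped-window helper is about as tidy as this gets without new syntax (a sketch, not a numpy feature): clipping the computed lower bounds at 0 gives exactly the no-wrap semantics.

        import numpy as np

        def window(arr, i, j, r):
            """arr[i-r : i+r+1, j-r : j+r+1] with edges clipped, never wrapped."""
            return arr[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]

        A = np.random.randint(0, 2, (5, 10))

        def foo(i, j, r=2):
            return window(A, i, j, r).sum()

        # foo(0, 0) now sums the 3x3 corner block instead of
        # silently wrapping around to the far edge of A.

    Only the lower bounds need clamping because, as noted above, oversized upper bounds are already treated like None.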

  • shell script segment to avoid overwriting files

    - by johndashen
    I have a Perl script (or any executable) E which will take a file foo.xml and write a file foo.txt. I use a Beowulf cluster to run E for a large number of XML files, but I'd like to write a simple job-server script in shell (bash) which doesn't overwrite existing txt files. I'm currently doing something like:

        #!/bin/sh
        PATTERN="[A-Z]*0[1-2][a-j]";  # this matches foo in all cases
        todo=`ls *.xml | grep $PATTERN`;
        isdone=`ls *.foo | grep $PATTERN`;
        whatsleft=todo - isdone;  # what's the unix magic?
        # and then call the job server:
        jobserve E "$whatsleft";

    and then I don't know how to get the difference between $todo and $isdone. I'd prefer using sort/uniq to something like a for loop with grep inside, but I'm not sure how to do it (pipes? temporary files?). As a bonus question, is there a way to do lookahead search in bash grep?
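
    A sketch of the set difference with comm(1), assuming the outputs really are .txt files: reduce both lists to bare basenames, sort them (comm requires sorted input), and keep what appears only in the todo column.

        #!/bin/bash
        # -23 suppresses lines unique to the done list and lines common to both,
        # leaving only basenames that still need processing.
        PATTERN='[A-Z]*0[1-2][a-j]'
        whatsleft=$(comm -23 \
            <(ls *.xml 2>/dev/null | grep "$PATTERN" | sed 's/\.xml$//' | sort) \
            <(ls *.txt 2>/dev/null | grep "$PATTERN" | sed 's/\.txt$//' | sort))

        for base in $whatsleft; do
            jobserve E "$base.xml"   # only files with no matching .txt get queued
        done

    The <(...) process substitution is a bashism, so the shebang needs to be bash rather than sh. (For the bonus question: GNU grep does lookahead with grep -P and a (?=...) pattern, where -P is available.)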
