Search Results

Search found 11409 results on 457 pages for 'large teams'.

  • Problem with Boost::Asio for C++

    - by Martin Lauridsen
    Hi there, for my bachelor's thesis I am implementing a distributed version of an algorithm for factoring large integers (finding the prime factorisation). This has applications in e.g. the security of the RSA cryptosystem. My vision is that clients (Linux or Windows) will download an application and compute some numbers (these are independent, and thus suited for parallelization). The numbers, which are not found very often, will be sent to a master server that collects them. Once enough numbers have been collected, the master server will do the rest of the computation, which cannot be easily parallelized.

    Anyhow, to the technicalities. I was thinking of using Boost::Asio for a socket client/server implementation, for the clients' communication with the master server. Since I want to compile for both Linux and Windows, I thought Windows would be as good a place to start as any. So I downloaded the Boost library and compiled it as described on the Boost Getting Started page:

        bootstrap
        .\bjam

    It all compiled just fine. Then I try to compile one of the tutorial examples, client.cpp, from Asio, found (here.. edit: can't post link because of restrictions). I am using the Visual C++ compiler from Microsoft Visual Studio 2008, like this:

        cl /EHsc /I D:\Downloads\boost_1_42_0 client.cpp

    But I get this error:

        /out:client.exe
        client.obj
        LINK : fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-s-1_42.lib'

    Anyone have any idea what could be wrong, or how I could move forward? I have been trying pretty much all week to get a simple client/server socket program for C++ working, but with no luck. Serious frustration kicking in. Thank you in advance.
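
    One likely fix, sketched under the assumption that bjam put its output in the default stage\lib directory: LNK1104 here means the linker cannot find the library that Boost's auto-linking requested, and the "-s" suffix in the requested name means it wants the static-CRT variant because cl defaults to /MT. Building against the DLL runtime and pointing the linker at the staged libraries addresses both:

        rem sketch only: adjust paths to the actual Boost install
        cl /EHsc /MD /I D:\Downloads\boost_1_42_0 client.cpp ^
           /link /LIBPATH:D:\Downloads\boost_1_42_0\stage\lib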

  • Testing When Correctness is Poorly Defined?

    - by dsimcha
    I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions.

    However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic.

    Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?

  • NHibernate - Retrieving Lots of Data Becomes Exponentially Slow

    - by nfplee
    Hi, I have an issue where retrieving lots of data in NHibernate (such as when producing a report) makes the page exponentially slower the more data it has to retrieve. I found the following article: http://nhforge.org/blogs/nhibernate/archive/2008/10/30/bulk-data-operations-with-nhibernate-s-stateless-sessions.aspx It explains that bulk data operations in NHibernate are slow because the first-level cache grows too large, and that you should use the IStatelessSession instead.

    The trouble is that I don't wish to tie my application to NHibernate, so I've added a wrapper around ISession. I then use LINQ as my query mechanism, but IStatelessSession does not support LINQ (it may do in NHibernate 3, but that LINQ provider is not stable as it stands at the moment). I then read that you could clear the session after so many iterations to flush out the first-level cache. The problem now is that you can't use lazy loading: the LINQ provider doesn't allow you to override the mapping defined (or eagerly fetch the additional data), so whenever I grab data which is lazy loaded after I have cleared the session, an exception is thrown.

    I'm completely lost on what to do now. I like the ease of producing reports with LINQ, but the limitations of the built-in LINQ provider in NHibernate seem to be holding me back. I'd really appreciate it if someone could show me an alternative approach. Thanks

  • Resize an image and maintain quality?

    - by JasonS
    Hi, I have a problem with resizing images. If you upload a file larger than the stated parameters, the image is cropped and then saved at 100% quality. So if I upload a large JPEG of 272 KB and the image is cropped by 100-odd pixels, the file size then goes up to 1.2 MB. We are saving images at 100% quality, and I assume that this is what is causing the problem. The image is exported from Photoshop at 30% quality, which reduces the file size; resaving it at 100% quality creates the same image but, I assume, with a lot of redundant file data. Has anyone encountered this before? Does anyone have a solution? This is what we are using:

        $source_im = imagecreatefromjpeg($file);
        $dest_im = imagecreatetruecolor($newsize_x, $newsize_y);

        imagecopyresampled($dest_im, $source_im,
                           0, 0, $offset_x, $offset_y,
                           $newsize_x, $newsize_y,
                           $sourceWidth, $sourceHeight);

        imagedestroy($source_im);

        if ($greyscale) {
            $dest_im = $this->imageconvertgreyscale($dest_im);
        }

        imagejpeg($dest_im, $save_to_file, $quality);
        break;
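
    A small sketch of the likely fix, assuming $quality is currently set to 100: the third argument to imagejpeg() is the JPEG quality (0-100), and dropping it into the 75-85 range usually keeps the output visually indistinguishable while shrinking the file dramatically.

        // A minimal sketch: re-encode at a moderate quality instead of 100.
        $quality = 80;   // assumption: tune per project
        imagejpeg($dest_im, $save_to_file, $quality);
        imagedestroy($dest_im);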

  • In SQL, can I return a table with a varying number of columns?

    - by Matt
    I have a somewhat more complicated scenario, but I think it should be possible. I have a large SPROC whose result is a set of characteristics for a set of persons, so the table would look something like this:

        Property | Client1   Client2   Client3
        -------------------------------------
        Sex      | M         F         M
        Age      | 67        56        67
        Income   | Low       Mid       Low

    It's built using cursors, iterating over different datasets. The problem I am facing is that there is a varying number of clients and properties, so an equally valid result over a different input set might be:

        Property | Client1   Client2
        ----------------------------
        Sex      | M         F
        Age      | 67        56
        Weight   | 122       122

    The varying number of properties is easy: those are just extra rows. My problem is that I need to declare a temporary table with a varying number of columns; there could be 2 clients or 100. Every client is guaranteed to have every property ultimately listed. What SQL structure would satisfy this, and how can I declare it and insert things into it? I can't just flip the columns and rows either, because there is a variable number of each.
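
    One common way to get a varying column list, sketched here under the assumption of SQL Server 2005 or later and an intermediate table holding the data in normalized form (one row per client/property/value): build the column list as a string and run a dynamic PIVOT.

        -- A minimal sketch: #Data(ClientName, Property, Value) is assumed to be
        -- filled by the existing cursor logic.
        DECLARE @cols nvarchar(max), @sql nvarchar(max);

        SELECT @cols = STUFF((SELECT ',' + QUOTENAME(ClientName)
                              FROM #Data
                              GROUP BY ClientName
                              ORDER BY ClientName
                              FOR XML PATH('')), 1, 1, '');

        SET @sql = N'SELECT Property, ' + @cols + N'
                     FROM #Data
                     PIVOT (MAX(Value) FOR ClientName IN (' + @cols + N')) AS p;';

        EXEC sp_executesql @sql;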

  • Suggestions for a django db structure

    - by rh0dium
    Hi, say I have an unknown number of questions. For example:

        Is the sky blue?              [y/n]
        What date were you born on?   [date]
        What is pi?                   [3.14]
        What is a large integer?      [100]

    Each of these questions has a different but very type-specific answer (boolean, date, float, int). Natively, Django can happily deal with these in a model:

        class SkyModel(models.Model):
            question = models.CharField("Is the sky blue")
            answer = models.BooleanField(default=False)

        class BirthModel(models.Model):
            question = models.CharField("What date were you born on")
            answer = models.DateTimeField(default=today)

        class PiModel(models.Model):
            question = models.CharField("What is pi")
            answer = models.FloatField()

    But this has the obvious problem that each question has a specific model, so if we need to add a question later I have to change the database. Yuck. So now I want to get fancy: how do I set up a model where the answer type conversion happens automagically?

        ANSWER_TYPES = (
            ('boolean', 'boolean'),
            ('date', 'date'),
            ('float', 'float'),
            ('int', 'int'),
            ('char', 'char'),
        )

        class Questions(models.Model):
            question = models.CharField()
            answer = models.CharField()
            answer_type = models.CharField(choices=ANSWER_TYPES)
            default = models.CharField()

    In theory this would do the following: when I build up my views I look at the type of answer and ensure that I only put in that value, but when I pull that answer back out it returns the data in the format specified by answer_type. For example, 3.14 comes back out as a float, not as a str. How can I perform this sort of automagic transformation? Or can someone suggest a better way to do this? Thanks much!!
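
    One way this is often handled, sketched here with hypothetical names (not code from the question): store the raw answer as text plus a type tag, and coerce it on access with a small property, so adding a new question never changes the schema.

        # A minimal sketch of the coercion-on-access idea.
        import datetime

        from django.db import models

        ANSWER_TYPES = (
            ('boolean', 'boolean'),
            ('date', 'date'),
            ('float', 'float'),
            ('int', 'int'),
            ('char', 'char'),
        )

        class Question(models.Model):
            question = models.CharField(max_length=255)
            answer = models.CharField(max_length=255, blank=True)
            answer_type = models.CharField(max_length=10, choices=ANSWER_TYPES)

            _CASTS = {
                'boolean': lambda v: v.lower() in ('1', 'true', 'y', 'yes'),
                'date': lambda v: datetime.datetime.strptime(v, '%Y-%m-%d').date(),
                'float': float,
                'int': int,
                'char': lambda v: v,
            }

            @property
            def typed_answer(self):
                """The stored text answer coerced to its declared type."""
                return self._CASTS[self.answer_type](self.answer)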

  • Best Functional Approach

    - by dbyrne
    I have some mutable Scala code that I am trying to rewrite in a more functional style. It is a fairly intricate piece of code, so I am trying to refactor it in pieces. My first thought was this:

        def iterate(count: Int, d: MyComplexType) = {
          // Generate next value n
          // Process n, causing some side effects
          return iterate(count - 1, n)
        }

    This didn't seem functional at all to me, since I still have side effects mixed throughout my code. My second thought was this:

        def generateStream(d: MyComplexType): Stream[MyComplexType] = {
          // Generate next value n
          return Stream.cons(n, generateStream(n))
        }

        for (n <- generateStream(initialValue).take(2000000)) {
          // process n, causing some side effects
        }

    This seemed like a better solution to me, because at least I've isolated my functional value-generation code from the mutable value-processing code. However, this is much less memory efficient because I am generating a large list that I don't really need to store. This leaves me with 3 choices:

    1. Write a tail-recursive function, bite the bullet and refactor the value-processing code.
    2. Use a lazy list. This is not a memory-sensitive app (although it is performance sensitive).
    3. Come up with a new approach.

    I guess what I really want is a lazily evaluated sequence where I can discard the values after I've processed them. Any suggestions?
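
    For what it's worth, a lazily evaluated sequence that does not retain processed values is essentially an Iterator; a minimal sketch (nextValue and process stand in for the question's own generation and processing code):

        // Iterator.iterate is lazy and does not hold on to values it has already
        // produced, unlike Stream, so memory use stays flat.
        def nextValue(d: MyComplexType): MyComplexType = ???  // pure generator (stub)
        def process(d: MyComplexType): Unit = ???             // side-effecting consumer (stub)

        Iterator.iterate(initialValue)(nextValue)
          .take(2000000)
          .foreach(process)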

  • How do I make an HTTP POST request to a URL from within a C++ DLL?

    - by tjorriemorrie
    Hi, there's an open-source application I need to use. It allows you to use a custom DLL. Unfortunately, I can't code C++; I don't like it and I don't want to learn it. I do know PHP very well, however, so you can see that I'd rather do my logic within a PHP application. Thus I'm thinking about posting the data from the C++ DLL to a URL on my localhost (I have my local server set up, that's not the problem). I need to post a large number of variables (thus a POST and not a GET request is required). The return value will only be one (int) variable: either 0, 1 or 2. So I need a C++ function that will: 1) post variables to a URL, and 2) wait for, and receive, the answer. The data type can be XML, SOAP, JSON, whatever; it doesn't matter. Is there anyone that can write a little C++ HTTP function for me? Pretty please? ;)
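
    A minimal sketch of one way to do this with libcurl (an assumption; the question doesn't name an HTTP library, and the URL and field names here are placeholders): POST a urlencoded body and read the small integer reply back.

        #include <string>
        #include <curl/curl.h>

        // Collect the response body into a std::string.
        static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
            static_cast<std::string*>(out)->append(data, size * nmemb);
            return size * nmemb;
        }

        // Returns the 0/1/2 answer from the PHP side, or -1 on failure.
        int post_to_php(const std::string& fields /* e.g. "a=1&b=2" */) {
            std::string reply;
            CURL* curl = curl_easy_init();
            if (!curl) return -1;

            curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/handler.php");
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, fields.c_str());
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, &reply);

            CURLcode rc = curl_easy_perform(curl);
            curl_easy_cleanup(curl);

            return (rc == CURLE_OK && !reply.empty()) ? std::stoi(reply) : -1;
        }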

  • Magento products will not show in category

    - by Aaron
    I've recently been tasked with the build and deployment of a large e-commerce site. In the past we've had to use the client's legacy X-Cart installation for redevelopment (too far integrated with their existing workflow). We'd heard good things about Magento, so I've set up a test install to get to grips with it. After a couple of initial issues, there is a live development site which displays categories on the default theme. The problem we've hit now is that products don't display..!

    After a lot more in-depth research into this, all I've been able to discover is that quite a number of developers endorse using other solutions entirely, with the other 50% saying that after the steep learning curve the platform is as wonderful as we'd initially been led to believe. Now, my test category is showing, so I know this is configured properly. I've set up three test products and associated them with it (all done following the Magento user guide), checked, double-checked and thrice-checked that the products are enabled and visible individually, yet still the front end says the category has no products in it. I've cleared the cache repeatedly and reset everything possible many times in Index Management, and still no products show up.

    I have to make a call tomorrow morning on whether we're going ahead with Magento. If I can't even get it to show products, I'm going to have to go with something with a more established track record and more community support available. Can anybody advise what could possibly be wrong here?

  • How do I check to see if a scalar has a compiled regex in it with Perl?

    - by Robert P
    Let's say I have a subroutine/method that a user can call to test some data that (as an example) might look like this:

        sub test_output {
            my ($self, $test) = @_;
            my $output = $self->long_process_to_get_data();
            if ($output =~ /\Q$test/) {
                $self->assert_something();
            } else {
                $self->do_something_else();
            }
        }

    Normally, $test is a string, which we're looking for anywhere in the output. This was an interface put together to make calling it very easy. However, we've found that sometimes a straight string is problematic - for example, a large, possibly varying number of spaces... a pattern, if you will. Thus, I'd like to let them pass in a regex as an option. I could just do:

        $output =~ $test

    if I could assume that it's always a regex, but ah, the backwards compatibility! If they pass in a string, it still needs to be tested like a raw string. So in that case, I'll need to check whether $test is a regex. Is there any good facility for detecting whether or not a scalar has a compiled regex in it?
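
    A small sketch of the usual check: a value built with qr// is blessed into the "Regexp" class, so ref() can tell a compiled pattern apart from a plain string.

        # Accept either a plain string or a qr// pattern.
        my $matcher = ref($test) eq 'Regexp' ? $test : qr/\Q$test\E/;

        if ($output =~ $matcher) {
            $self->assert_something();
        } else {
            $self->do_something_else();
        }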

  • Deleting While Iterating in Ruby?

    - by Jesse J
    I'm iterating over a very large set of strings, which iterates over a smaller set of strings. Due to the size, this method takes a while to run, so to speed it up I'm trying to delete one of the strings from the smaller set once it no longer needs to be used. Below is my current code:

        Ms::Fasta.foreach(@database) do |entry|
          all.each do |set|
            if entry.header[1..40].include? set[1] + "|"
              startVal = entry.sequence.scan_i(set[0])[0]
              if startVal != nil
                @locations << [set[0], set[1], startVal, startVal + set[1].length]
                all.delete(set)
              end
            end
          end
        end

    The problem I face is that the easy way, array.delete(string), effectively adds a break statement to the inner loop, which messes up the results. The only way I know how to fix this is to do this:

        Ms::Fasta.foreach(@database) do |entry|
          i = 0
          while i < all.length
            set = all[i]
            if entry.header[1..40].include? set[1] + "|"
              startVal = entry.sequence.scan_i(set[0])[0]
              if startVal != nil
                @locations << [set[0], set[1], startVal, startVal + set[1].length]
                all.delete_at(i)
                i -= 1
              end
            end
            i += 1
          end
        end

    This feels kind of sloppy to me. Is there a better way to do this? Thanks.
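
    One tidier option, sketched against the same fields used above (scan_i and the header slice come from the question's code): Array#delete_if removes elements in place during its own single pass, so there is no index bookkeeping and no skipped entries.

        Ms::Fasta.foreach(@database) do |entry|
          all.delete_if do |set|
            next false unless entry.header[1..40].include?(set[1] + "|")
            start_val = entry.sequence.scan_i(set[0])[0]
            next false if start_val.nil?
            @locations << [set[0], set[1], start_val, start_val + set[1].length]
            true   # drop this set from "all" now that it has been located
          end
        end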

  • is this a design pattern?

    - by Michel
    Hi all, I have to build a financial data report, and for making the calculation there are a lot of 'if then' situations: if it's a large client, subtract 10%; if its postal code equals '10101', add 10%; if the day is a Saturday, make a difficult calculation; etc. I once read about this kind of example, and what they did was (I hope I remember well) create a class with some base info and make it possible to add all kinds of calculation objects to it. To put what I remembered in pseudo code:

        Basecalc bc = new Basecalc();
        // put the info in the bc so other objects can do their if
        bc.Add(new LargeCustomerCalc());
        bc.Add(new PostalCodeCalc());
        bc.Add(new WeekdayCalc());

    Then the bc would run the Calc() methods of all of the added Calc objects. As I type this, I think all the Calc objects must be able to see the Basecalc properties to correctly perform their calculation logic. So all the ifs are in the different Calc objects and not ALL in the Basecalc. Does this make sense? I was wondering if this is some kind of design pattern?
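
    What is described is usually a list of pluggable rule objects sharing one interface, most often discussed under the Strategy pattern (sometimes Specification or Chain of Responsibility, depending on the details). A minimal Java sketch with hypothetical names, not taken from the question:

        import java.util.ArrayList;
        import java.util.List;

        // Each rule sees the shared data and adjusts the running total.
        interface CalcStep {
            double apply(Invoice invoice, double total);
        }

        class Invoice {
            final boolean largeCustomer;
            final String postalCode;
            Invoice(boolean largeCustomer, String postalCode) {
                this.largeCustomer = largeCustomer;
                this.postalCode = postalCode;
            }
        }

        class BaseCalc {
            private final List<CalcStep> steps = new ArrayList<>();
            void add(CalcStep step) { steps.add(step); }
            double calc(Invoice invoice, double total) {
                for (CalcStep step : steps) {
                    total = step.apply(invoice, total);  // run every added rule
                }
                return total;
            }
        }

        class Demo {
            public static void main(String[] args) {
                BaseCalc bc = new BaseCalc();
                bc.add((inv, t) -> inv.largeCustomer ? t * 0.90 : t);              // -10%
                bc.add((inv, t) -> "10101".equals(inv.postalCode) ? t * 1.10 : t); // +10%
                System.out.println(bc.calc(new Invoice(true, "10101"), 100.0));
            }
        }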

  • What runs faster? Wordpress or Drupal 6.x?

    - by electblake
    So... I run a pretty large WordPress blog. Currently it gets around 20k+ page views a day, and it's always a struggle to keep the bad boy running quickly; I currently run it on a vps.net VPS with CentOS 5.3. I am also a Drupal developer by trade, so I love the CMS framework for its versatility and portability (I can take work from one site and implement it on another with great ease).

    MY QUESTION IS: which is faster, then: WordPress 3.x or Drupal 6.x? I'd love to migrate my site to Drupal to be able to roll out new features etc. (which I find awkward to do in WordPress), but I am scared that Drupal may not be able to handle the traffic. Any opinions? I know that some major players use Drupal, as Dries documents well on his blog, but I'm under no illusions: Drupal can be a real hog.

    Thanks for any/all help! Please try to avoid server optimization talk unless it pertains to WordPress or Drupal 6.x specifically; I love to learn more about optimizations, but I do want to sort out which platform is quicker :)

    P.S. - I realize the fastest option is to use a lower-level framework (with less overhead) like CakePHP etc., but assume that isn't an option ;)

  • Online job-searching is tedious. Help me automate it.

    - by ehsanul
    Many job sites have broken searches that don't let you narrow down jobs by experience level. Even when they do, it's usually wrong. This forces you to wade through hundreds of postings that you can't apply for before finding a relevant one, which is quite tedious. Since I'd rather focus on writing cover letters etc., I want to write a program to look through a large number of postings and save the URLs of just those jobs that don't require years of experience. I don't need help writing the scraper to get the HTML bodies of possibly relevant job posts. The issue is accurately detecting the level of experience required for the job.

    This should not be too difficult, as job posts are usually very explicit about it ("must have 5 years experience in..."), but there may be some issues with overly simple solutions. In my case, I'm looking for entry-level positions. Often they don't say "entry-level", but inclusion of the words probably means the job should be saved. Next, I can safely exclude a job that says it requires "5 years" of experience in whatever, so a regex like /\d\syears/ seems reasonable for excluding jobs. But then I realized some jobs say they'll take 0-2 years of experience, which matches the exclusion regex but is clearly a job I want to take a look at. Hmmm, I can handle that with another regex. But some say "less than 2 years" or "fewer than 2 years". I can handle that too, but it makes me wonder what other patterns I'm not thinking of, and whether I'm excluding many jobs.

    That's what brings me here: to find a better way to do this than regexes, if there is one. I'd like to minimize the false negative rate and save all the jobs that seem like they might not require many years of experience. Does excluding anything that matches /[3-9]\syears|1\d\syears/ seem reasonable? Or is there a better way? Training a Bayesian filter maybe?
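
    For reference, the regex route sketched in Python with illustrative (not exhaustive) patterns and a hypothetical function name: keep a posting unless it explicitly demands three or more years, and always keep it if it uses entry-level language.

        import re

        REQUIRES_EXPERIENCE = re.compile(
            r'\b(?:[3-9]|1\d)\+?\s*(?:years?|yrs?)\b', re.IGNORECASE)
        ENTRY_LEVEL = re.compile(
            r'\bentry[- ]level\b|\bno experience (?:required|necessary)\b',
            re.IGNORECASE)

        def looks_entry_level(posting_text):
            """Err on the side of keeping a posting (minimize false negatives)."""
            if ENTRY_LEVEL.search(posting_text):
                return True
            return not REQUIRES_EXPERIENCE.search(posting_text)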

  • identifier ... is undefined when trying to run pure C code in Cuda using nvcc

    - by Lostsoul
    I'm new and learning CUDA. An approach that I'm trying to use to learn is to write code in C and, once I know it's working, start converting it to CUDA, since I read that nvcc compiles CUDA code but compiles everything else as plain old C. My code works in C (using gcc), but when I try to compile it using nvcc (after changing the file name from main.c to main.cu) I get:

        main.cu(155): error: identifier "num_of_rows" is undefined
        main.cu(155): error: identifier "num_items_in_row" is undefined
        2 errors detected in the compilation of "/tmp/tmpxft_00002898_00000000-4_main.cpp1.ii".

    Basically, in my main method I send data to a function like this:

        process_list(count, countListItem, list);

    The first two items are ints and the last item (list) is a matrix. Then I declare my function like this:

        void process_list(int num_of_rows, int num_items_in_row, int current_list[num_of_rows][num_items_in_row]) {

    This line is where I get my errors when using nvcc (line 155). I need to convert this code to CUDA anyway, so there is no need to troubleshoot this specific issue (plus the code is quite large), but I'm wondering if I'm wrong about nvcc treating the C part of your code like plain C. In the book CUDA by Example I just used nvcc to compile, but do I need any extra flags when just using pure C?
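
    A short note on the likely cause, with a sketch (the loop body is a placeholder): nvcc pushes host code through a C++ compiler, and C++ has no C99 variable-length-array parameters, which is exactly what that signature uses. Passing a flat pointer and indexing by hand keeps the same memory layout and compiles as both C and C++.

        void process_list(int num_of_rows, int num_items_in_row, const int *current_list)
        {
            for (int r = 0; r < num_of_rows; ++r) {
                for (int c = 0; c < num_items_in_row; ++c) {
                    int value = current_list[r * num_items_in_row + c];
                    (void)value;  /* placeholder: real per-element work goes here */
                }
            }
        }

        /* call site: process_list(count, countListItem, &list[0][0]); */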

  • How do I Order on common attribute of two models in the DB?

    - by Will
    If I have two tables, Books and CDs, with corresponding models, I want to display to the user a list of books and CDs. I also want to be able to sort this list on common attributes (release date, genre, price, etc.). I also have basic filtering on the common attributes. The list will be large, so I will be using pagination to manage the load.

        items = []
        items << CD.all(:limit => 20, :page => params[:page], :order => "genre ASC")
        items << Book.all(:limit => 20, :page => params[:page], :order => "genre ASC")
        re_sort(items, "genre ASC")

    Right now I am doing two queries, concatenating them and then sorting them. This is very inefficient. It also breaks down when I use paging and filtering: if I am on page 2, how do I know what page of each individual table I am really on? There is no way to determine this information without getting all items from each table. I have thought that if I create a new class called Item that has a one-to-one relationship with either a Book or a CD, I could do something like:

        Item.all(:limit => 20, :page => params[:page], :include => [:books, :cds],
                 :order => "genre ASC")

    However, this gives back an ambiguous error, so it can only be refined as:

        Item.all(:limit => 20, :page => params[:page], :include => [:books, :cds],
                 :order => "books.genre ASC")

    And that does not interleave the books and CDs in the way that I want. Any suggestions?

  • Rebuilding old (2010) django project in 2012

    - by birgit
    I am trying to make an old Django project run again. After seemingly having solved issues with old sorl.thumbnail versions and deprecated expressions, I now get this error when running python manage.py runserver. I also tried to copy and paste my old files into a new Django project and get exactly the same error. Maybe someone here has a clue where the problem lies?

        Unhandled exception in thread started by <bound method Command.inner_run of <django.contrib.staticfiles.management.commands.runserver.Command object at 0x2a80510>>
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/django/core/management/commands/runserver.py", line 88, in inner_run
            self.validate(display_num_errors=True)
          File "/usr/lib/python2.7/dist-packages/django/core/management/base.py", line 249, in validate
            num_errors = get_validation_errors(s, app)
          File "/usr/lib/python2.7/dist-packages/django/core/management/validation.py", line 35, in get_validation_errors
            for (app_name, error) in get_app_errors().items():
          File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 146, in get_app_errors
            self._populate()
          File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 61, in _populate
            self.load_app(app_name, True)
          File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 78, in load_app
            models = import_module('.models', app_name)
          File "/usr/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
            __import__(name)
          File "/home/me/Documents/wdws/wdws/../wdws/cityofwindows/models.py", line 73, in <module>
            class Image(models.Model):
          File "/home/me/Documents/wdws/wdws/../wdws/cityofwindows/models.py", line 83, in Image
            'large': {'size': (640, 640)},
          File "/usr/lib/python2.7/dist-packages/django/db/models/fields/files.py", line 233, in __init__
            super(FileField, self).__init__(verbose_name, name, **kwargs)
        TypeError: __init__() got an unexpected keyword argument 'extra_thumbnails'

    I need to rebuild the project just for visual documentation locally, so any hints on how to quickly re-run outdated Django projects are also very welcome!! Thanks a lot (using Ubuntu 12.04).

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data. It could be 16-bit, 24-bit packed, 32-bit, etc.; it could be signed or unsigned, and it could be 32- or 64-bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24-bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16-bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code.

    What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32- and 16-bit signed PCM respectively; I'll probably have to store the 24-bit PCM in an int32_t for 32-bit, 8-bit-byte systems as well. Can anyone recommend a good way to put data into this array and pull it out later? I'd like to avoid large sections of code which look like:

        switch( mFormat )
        {
        case 1: // unsigned 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
        ...

    Limitations: I'm working in C/C++, no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16-bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
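
    One pattern that keeps the per-format logic in a single place without templates or RTTI, sketched with hypothetical names and an illustrative format numbering: a small table of reader functions, one per sample format, all normalizing to a common type.

        #include <stdint.h>

        typedef double (*sample_reader)(const void *channel, int frame);

        static double read_u8 (const void *ch, int i) { return ((const uint8_t *)ch)[i]; }
        static double read_s8 (const void *ch, int i) { return ((const int8_t  *)ch)[i]; }
        static double read_s16(const void *ch, int i) { return ((const int16_t *)ch)[i]; }
        static double read_s32(const void *ch, int i) { return ((const int32_t *)ch)[i]; }
        static double read_f32(const void *ch, int i) { return ((const float   *)ch)[i]; }

        /* index chosen once at buffer-creation time from the runtime format code;
           extend with 24-bit packed, unsigned 16-bit, float64, ... */
        static const sample_reader readers[] = {
            read_u8, read_s8, read_s16, read_s32, read_f32
        };

        double get_sample(void **pcm, int reader_index, int channel, int frame)
        {
            return readers[reader_index](pcm[channel], frame);
        }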

  • using usort to re-arrange multi-dimension array

    - by Thomas Bennett
    I have a large array:

        [0] => stdClass Object
            (
                [products] => Array
                    (
                        [0] => stdClass Object
                            (
                                [body_html] => bodyhtml
                                [published_at] => 2012-12-16T23:59:18+00:00
                                [created_at] => 2012-12-16T11:30:24+00:00
                                [updated_at] => 2012-12-18T10:52:14+00:00
                                [vendor] => Name
                                [product_type] => type
                            )
                        [1] => stdClass Object
                            (
                                [body_html] => bodyhtml
                                [published_at] => 2012-12-16T23:59:18+00:00
                                [created_at] => 2012-12-16T10:30:24+00:00
                                [updated_at] => 2012-12-18T10:52:14+00:00
                                [vendor] => Name
                                [product_type] => type
                            )
                    )
            )

    and I need to arrange each of the products by the date they were created. I've tried and failed with all kinds of usort, ksort and uksort techniques to get it into a specific (chronological) order. Any guidance would be most appreciated.
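
    A minimal sketch, assuming $data holds the decoded structure shown above: usort with a comparator on created_at puts each products array into chronological order.

        usort($data[0]->products, function ($a, $b) {
            // strtotime understands the ISO 8601 timestamps shown above
            return strtotime($a->created_at) - strtotime($b->created_at);
        });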

  • Chrome renders button links completely screwed up when placed inside a paragraph

    - by Ferdy
    I am fairly proficient in CSS, but now I am running into a very strange rendering issue in Google Chrome 9. I am trying to create some fancy-looking link buttons (basically heavily styled anchors). Here is some example markup:

        <a href="" class="button">
            <figure class="sprite icon icon_back"></figure>
            Link button with icon</a>

    This markup may look a little strange to you; there are a few things you should know. I am using HTML5's figure element to include an icon as part of the button. I have the proper reset CSS applied and Chrome can render this tag for sure. Instead of actually pointing to an image, I am applying CSS classes to the figure element. Within the CSS I am using the spriting technique to show the correct portion of a single large sprite image.

    All of this is working fine in Firefox, and actually also in Chrome. The correct rendering can be seen in the following image. It renders like that in both Firefox and Chrome. Here comes the problem: if I place such a button within paragraph tags <p></p>, this is what happens in Chrome only. Notice how the button is ripped apart? Only in Chrome, and only when placed inside a paragraph. It gets even stranger: this only happens for the first button inside the paragraph. If I were to place three buttons inside a paragraph, only the 1st one is screwed up.

    Your first question would probably be about the CSS. It is rather verbose, so hereby a temporary link to the page in question: Edit: link to live page removed, was only temporary for problem inspection.

  • How to emulate mod_rewrite in PHP

    - by Tyler Crompton
    I have a few URLs that I want to map to certain files via PHP. Currently I am just using mod_rewrite in Apache. However, my application is getting too large for the rewriting to be done with regular expressions, so I created a file, router.php, that does the rewriting. I understand that to do a redirect I could just send the Location: header. However, I don't always want to do a redirect. For example, I may want /api/item/ to map to the file /herp/derp.php relative to the document root. I need to preserve the HTTP method as well. "No problem," I thought. I made my .htaccess have the following snippet:

        RewriteEngine On
        RewriteRule ^api/item/$ /cgi-bin/router.php [L]

    And my router.php file looks as follows:

        <?php
        $uri = parse_url($_SERVER['REQUEST_URI']);
        $query = isset($uri['query']) ? $uri['query'] : array();

        // some code that modifies the query

        require_once "{$_SERVER['DOCUMENT_ROOT']}/herp/derp.php?" . http_build_query($query);
        ?>

    However, this doesn't work, because the OS is looking for a file named derp.php?some=query. How can I simulate a rewrite rule such as

        RewriteRule ^api/item/$ /herp/derp/ [L]

    in PHP? In other words, how do I tell the server to process a different URL than the one requested, and preserve the query and HTTP method, without causing a redirect?

    Note: using variables set in router.php is less than desirable and is bad structure, since router.php is only supposed to be responsible for handling URLs. I am open to using a lightweight third-party solution.
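
    A minimal sketch of one way to hand off internally without a redirect (paths follow the example above): merge the modified query into $_GET and include the target script directly. $_SERVER['REQUEST_METHOD'] and the POST body are untouched by the include, so the HTTP method is preserved automatically.

        <?php
        $uri   = parse_url($_SERVER['REQUEST_URI']);
        $query = array();
        if (isset($uri['query'])) {
            parse_str($uri['query'], $query);   // "a=1&b=2" -> array('a' => '1', 'b' => '2')
        }

        // ... modify $query here ...

        $_GET = array_merge($_GET, $query);
        require $_SERVER['DOCUMENT_ROOT'] . '/herp/derp.php';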

  • Implementing a bitfield using java enums

    - by soappatrol
    Hello, I maintain a large document archive and I often use bit fields to record the status of my documents during processing or when validating them. My legacy code simply uses static int constants such as:

        static int DOCUMENT_STATUS_NO_STATE = 0
        static int DOCUMENT_STATUS_OK = 1
        static int DOCUMENT_STATUS_NO_TIF_FILE = 2
        static int DOCUMENT_STATUS_NO_PDF_FILE = 4

    This makes it pretty easy to indicate the state a document is in by setting the appropriate flags. For example:

        status = DOCUMENT_STATUS_NO_TIF_FILE | DOCUMENT_STATUS_NO_PDF_FILE;

    Since the approach of using static constants is bad practice, and because I would like to improve the code, I was looking to use enums to achieve the same. There are a few requirements, one of them being the need to save the status into a database as a numeric type, so there is a need to transform the enumeration constants to a numeric value. Below is my first approach, and I wonder if this is the correct way to go about it?

        class DocumentStatus {

            public enum StatusFlag {
                DOCUMENT_STATUS_NOT_DEFINED(1<<0),
                DOCUMENT_STATUS_OK(1<<1),
                DOCUMENT_STATUS_MISSING_TID_DIR(1<<2),
                DOCUMENT_STATUS_MISSING_TIF_FILE(1<<3),
                DOCUMENT_STATUS_MISSING_PDF_FILE(1<<4),
                DOCUMENT_STATUS_MISSING_OCR_FILE(1<<5),
                DOCUMENT_STATUS_PAGE_COUNT_TIF(1<<6),
                DOCUMENT_STATUS_PAGE_COUNT_PDF(1<<7),
                DOCUMENT_STATUS_UNAVAILABLE(1<<8);

                private final long statusFlagValue;

                StatusFlag(long statusFlagValue) {
                    this.statusFlagValue = statusFlagValue
                }

                public long getStatusFlagValue() {
                    return statusFlagValue
                }
            }

            /**
             * Translates a numeric status code into a Set of StatusFlag enums
             * @param numeric statusValue
             * @return EnumSet representing a document's status
             */
            public EnumSet<StatusFlag> getStatusFlags(long statusValue) {
                EnumSet statusFlags = EnumSet.noneOf(StatusFlag.class)
                StatusFlag.each { statusFlag ->
                    long flagValue = statusFlag.statusFlagValue
                    if ((flagValue & statusValue) == flagValue) {
                        statusFlags.add(statusFlag)
                    }
                }
                return statusFlags
            }

            /**
             * Translates a set of StatusFlag enums into a numeric status code
             * @param Set of statusFlags
             * @return numeric representation of the document status
             */
            public long getStatusValue(Set<StatusFlag> flags) {
                long value = 0
                flags.each { statusFlag ->
                    value |= statusFlag.getStatusFlagValue()
                }
                return value
            }

            public static void main(String[] args) {
                DocumentStatus ds = new DocumentStatus();

                Set statusFlags = EnumSet.of(
                    StatusFlag.DOCUMENT_STATUS_OK,
                    StatusFlag.DOCUMENT_STATUS_UNAVAILABLE)
                assert ds.getStatusValue(statusFlags) == 258 // 0000.0001|0000.0010

                long numericStatusCode = 56
                statusFlags = ds.getStatusFlags(numericStatusCode)
                assert !statusFlags.contains(StatusFlag.DOCUMENT_STATUS_OK)
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_TIF_FILE)
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_PDF_FILE)
                assert statusFlags.contains(StatusFlag.DOCUMENT_STATUS_MISSING_OCR_FILE)
            }
        }
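
    For comparison, a plain-Java sketch of the same idea (hypothetical, shortened names; note that deriving each bit from ordinal() ties the stored value to declaration order, which matters once values are persisted):

        import java.util.EnumSet;

        enum StatusFlag {
            NOT_DEFINED, OK, MISSING_TID_DIR, MISSING_TIF_FILE, MISSING_PDF_FILE,
            MISSING_OCR_FILE, PAGE_COUNT_TIF, PAGE_COUNT_PDF, UNAVAILABLE;

            long bit() { return 1L << ordinal(); }

            static long toValue(EnumSet<StatusFlag> flags) {
                long value = 0L;
                for (StatusFlag f : flags) value |= f.bit();
                return value;
            }

            static EnumSet<StatusFlag> fromValue(long value) {
                EnumSet<StatusFlag> flags = EnumSet.noneOf(StatusFlag.class);
                for (StatusFlag f : values()) {
                    if ((value & f.bit()) != 0L) flags.add(f);
                }
                return flags;
            }
        }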

  • How to determine the width of content or size a container to content

    - by ISVK
    My goal is to display the entire contents of a FlowDocument (that is, without paginating) in a column layout. It would have a fixed height, but the width would depend on the contents of the FlowDocument. My problem is that FlowDocumentReader does not automatically resize to the contents of the FlowDocument. As you see in my XAML below, FlowDocumentReader.Width is 5000 units (just a large number that can accommodate most documents); when I make it Auto, it just clips to the width of the ScrollViewer and paginates my stuff! Is there a proper way of solving this problem? I also made a screenshot of what this looks like now, but the ScrollViewer scrolls past the end of the document in most cases: http://i.imgur.com/3FSRl.png

        <ScrollViewer x:Name="scrollViewer"
                      HorizontalScrollBarVisibility="Visible"
                      VerticalScrollBarVisibility="Disabled">
            <FlowDocumentReader x:Name="flowDocReader" ClipToBounds="False" Width="5000">
                <FlowDocument x:Name="flowDoc"
                              Foreground="#FF404040"
                              ColumnRuleWidth="2"
                              ColumnGap="40"
                              ColumnRuleBrush="#FF404040"
                              IsHyphenationEnabled="True"
                              IsOptimalParagraphEnabled="True"
                              ColumnWidth="150">
                    <Paragraph>
                        Lorem ipsum dolor sit amet, ...etc...
                    </Paragraph>
                    <Paragraph>
                        Lorem ipsum dolor sit amet, ...etc...
                    </Paragraph>
                    <Paragraph>
                        Lorem ipsum dolor sit amet, ...etc...
                    </Paragraph>
                </FlowDocument>
            </FlowDocumentReader>
        </ScrollViewer>

  • Improve Efficiency in Array comparison in Ruby

    - by user2985025
    Hi, I am working in Ruby/Cucumber and have a requirement to develop a comparison module/program to compare two files. The requirements: the project is a migration project, where data from one application is moved to another, and we need to compare the data from the existing application against the new one.

    Solution: I have developed a comparison engine in Ruby for the above requirement.

    a) Get the data, de-duplicated and sorted, from both DBs.
    b) Put the data in a text file with "||" as the delimiter.
    c) Use the key columns (numbers) that identify a unique record in the DB to compare the two files. For example, file1 has 1,2,3,4,5,6 and file2 has 1,2,3,4,5,7, and columns 1,2,3,4,5 are key columns. I use these key columns and compare 6 and 7, which results in a fail.

    Issue: the major issue we are facing is that if the mismatches are more than 70% for 100,000 records or more, the comparison time is large. If the mismatches are less than 40%, the comparison time is OK. Diff and diff-lcs will not work in this case because we need key columns to arrive at an accurate data comparison between the two applications. Is there any other method to efficiently reduce the time when the mismatches are more than 70% for 100,000 records or more? Thanks
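
    One sketch of a way to keep the runtime flat regardless of the mismatch rate (the field layout and file names are assumptions): index one file by its key columns in a Hash, then stream the second file through it, so every record is matched in constant time instead of being rescanned against the whole other file.

        # Assumes "||"-delimited rows whose first five fields form the key.
        KEY = 0..4

        old_rows = {}
        File.foreach('existing_app.txt') do |line|
          fields = line.chomp.split('||')
          old_rows[fields[KEY]] = fields
        end

        mismatches = []
        File.foreach('new_app.txt') do |line|
          fields = line.chomp.split('||')
          other = old_rows[fields[KEY]]
          mismatches << fields if other.nil? || other != fields
        end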

  • Uncommitted reads in SSIS

    - by OldBoy
    I'm trying to debug some legacy Integration Services code, and really want some confirmation on what I think the problem is.

    We have a very large data task inside a control flow container. This control flow container is set up with TransactionOption = Supported, i.e. it will 'inherit' transactions from parent containers, but none are set up here. Inside the data flow there is a call to a stored proc that writes to a table, with pseudo code something like: "If a record doesn't exist that matches these parameters then write it."

    Now, the issue is that there are three records being passed into this proc, all with the same parameters, so logically the first record doesn't find a match and a record is created. The second record (with the same parameters) also doesn't find a match and another record is created. My understanding is that the first 'record' passed to the proc in the data flow is uncommitted and therefore can't be 'read' by the second call. The upshot is that all three records create a row, when logically only the first should.

    In this scenario, am I right in thinking that it is the uncommitted transaction that stops the second call from seeing the first? Even setting the isolation level on the container doesn't help, because it's not being wrapped in a transaction anyway...

    Hope that makes sense, and any advice gratefully received. Work-arounds confer god-like status on you.
