Search Results

Search found 265 results on 11 pages for 'sane'.

Page 9/11 | < Previous Page | 5 6 7 8 9 10 11  | Next Page >

  • RESTful WCF Data Service Authentication

    - by Adrian Grigore
    Hi, I'd like to implement a REST api to an existing ASP.NET MVC website. I've managed to set up WCF Data services so that I can browse my data, but now the question is how to handle authentication. Right now the data service is secured via the site's built-in forms authentication, and that's ok when accessing the service from AJAX forms. However, it's not ideal for a RESTful api. What I would like as an alternative to forms authentication is for the users to simply embed the user name and password into the url to the web service or as request parameters. For example, if my web service is usually accessible as http://localhost:1234/api.svc I'd like to be able to access it using the url http://localhost:1234/api.svc/{login}/{password} So, my questions are as follows: Is this a sane approach? If yes, how can I implement this? It seems trivial to redirect GET requests so that the login and password are attached as GET parameters. I also know how to inspect the http context and use those parameters to filter the results. But I am not sure if / how the same approach could be applied to POST, PUT and DELETE requests. Thanks, Adrian

    Read the article

  • Compile MonoDevelop 4.2.3

    - by user2942643
    I need help: I'm trying to compile the MonoDevelop code, but when I run "./configure" it tells me that I need to have a version of mono installed, even though I already have it installed: [raven@localhost ~]$ mono -V Mono JIT compiler version 3.2.8 (tarball Fri May 30 08:15:47 CDT 2014) Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com TLS: __thread SIGSEGV: altstack Notifications: epoll Architecture: amd64 Disabled: none Misc: softdebug LLVM: supported, not enabled. GC: sgen [raven@localhost ~]$ cd /home/raven/Downloads/monodevelop-4.2.3 [raven@localhost monodevelop-4.2.3]$ ./configure checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /usr/bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking how to create a ustar tar archive... gnutar checking whether to enable maintainer-specific portions of Makefiles... no checking for mono... /usr/local/bin/mono checking for gmcs... /usr/local/bin/gmcs checking for pkg-config... /usr/bin/pkg-config configure: error: You need mono 3.0.4 or newer [raven@localhost monodevelop-4.2.3]$

    Read the article

  • MessageSecurityException: The security header element 'Timestamp' with the '' id must be signed

    - by NiklasN
    I'm asking the same question here that I've already asked on the MSDN forums http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/70f40a4c-8399-4629-9bfc-146524334daf I'm consuming a (most likely Java based) Web Service which I have absolutely no access to modify. It won't be modified even if I asked them (it's a nationwide system). I've written the client with WCF. Here's some code: CustomBinding binding = new CustomBinding(); AsymmetricSecurityBindingElement element = SecurityBindingElement.CreateMutualCertificateDuplexBindingElement(MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10); element.AllowSerializedSigningTokenOnReply = true; element.SetKeyDerivation(false); element.IncludeTimestamp = true; element.KeyEntropyMode = SecurityKeyEntropyMode.ClientEntropy; element.MessageProtectionOrder = System.ServiceModel.Security.MessageProtectionOrder.SignBeforeEncrypt; element.LocalClientSettings.IdentityVerifier = new CustomIdentityVerifier(); element.SecurityHeaderLayout = SecurityHeaderLayout.Lax; element.IncludeTimestamp = false; binding.Elements.Add(element); binding.Elements.Add(new TextMessageEncodingBindingElement(MessageVersion.Soap11, Encoding.UTF8)); binding.Elements.Add(new HttpsTransportBindingElement()); EndpointAddress address = new EndpointAddress(new Uri("url")); ChannelFactory<MyPortTypeChannel> factory = new ChannelFactory<MyPortTypeChannel>(binding, address); ClientCredentials credentials = factory.Endpoint.Behaviors.Find<ClientCredentials>(); credentials.ClientCertificate.Certificate = myClientCert; credentials.ServiceCertificate.DefaultCertificate = myServiceCert; credentials.ServiceCertificate.Authentication.CertificateValidationMode = X509CertificateValidationMode.None; service = factory.CreateChannel(); After this every request done to the service fails on the client side (I can confirm my request is accepted by the service and a sane response is being returned), and I always get the following exception: MessageSecurityException: The security header element 'Timestamp' with the '' id must be signed. By looking at the trace I can see that in the response there really is a timestamp element, but in the security section there is only a signature for the body. Can I somehow make WCF ignore the fact that the Timestamp isn't signed?

    Read the article

  • How should I implement reverse AJAX in a Django application?

    - by Carson Myers
    How should I implement reverse AJAX when building a chat application in Django? I've looked at Django-Orbited, and from my understanding, this puts a comet server in front of the HTTP server. This seems fine if I'm just running the Django development server, but how does this work when I start running the application from mod_wsgi? How does having the orbited server handling every request scale? Is this the correct approach? I've looked at another approach (long polling) that seems like it would work, although I'm not sure what all would be involved (a sketch follows below). Would the client request a page that would live in its own thread, so as not to block the rest of the application? Would it even block? Wouldn't the script requested by the client have to continuously poll for information? Which of the approaches is more proper? Which is more portable, scalable, sane, etc? Are there other good approaches to this (aside from the client polling for messages) that I have overlooked?
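
    One way to make the long-polling option concrete is sketched below. This is only a rough illustration, not a recommendation: the ChatMessage model, its fields and the timeout values are assumptions of mine, and since each pending request ties up a worker for the whole poll, it only behaves acceptably behind mod_wsgi with a generous worker pool or an evented front end.

        # Hypothetical long-polling view; model and field names are placeholders.
        import json
        import time

        from django.http import HttpResponse

        from myapp.models import ChatMessage  # assumed model with room_id, sender, text

        POLL_TIMEOUT = 25   # seconds before giving up and returning an empty list
        POLL_INTERVAL = 1   # seconds between database checks

        def poll_messages(request, room_id, last_id):
            """Block until a message newer than last_id appears, or the timeout expires.
            The client re-issues the request as soon as it receives a response."""
            deadline = time.time() + POLL_TIMEOUT
            while time.time() < deadline:
                new = list(ChatMessage.objects
                           .filter(room_id=room_id, id__gt=last_id)
                           .order_by('id')
                           .values('id', 'sender', 'text'))
                if new:
                    return HttpResponse(json.dumps(new), content_type='application/json')
                time.sleep(POLL_INTERVAL)
            return HttpResponse(json.dumps([]), content_type='application/json')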

    Read the article

  • What is the right way to make a new XMLHttpRequest from an RJS response in Ruby on Rails?

    - by Yuri Baranov
    I'm trying to come closer to a solution for the problem of my previous question. The scheme I would like to try is the following: The user requests an action from an RoR controller. The action makes some database queries, makes some calculations, sets some session variable(s) and returns some RJS code as the response. This code could either update a progress bar and make another ajax request, or display the final result (e.g. a chart graphic) if all the processing is finished. The browser evaluates the javascript representation of the RJS. It may make another (recursive? Is recursion allowed at all?) request, or just display the result for the user. So, my question this time is: how can I embed an XMLHttpRequest call into rjs code properly? Some things I'd like to know are: Should I create a new thread to avoid stack overflow? What rails helpers (if any) should I use? Has anybody ever done something similar before on Rails or with other frameworks? Is my idea sane?

    Read the article

  • Purge complete Python installation on OS X

    - by Konrad Rudolph
    I’m working on a recently upgraded OS X Snow Leopard system with MacPorts, and I’m running into problems at every corner. The first problem is the sheer number of installed Python versions: altogether, there are four: 2.5, 2.6 and 3.0 in /Library/Frameworks/Python.framework, and 2.6 in /opt/local/Library/Frameworks/Python.framework/ (the MacPorts installation). So there are at least two useless/redundant versions: 2.5 and the redundant 2.6. Additionally, the pre-installed Python is giving me severe problems because some of the pre-installed libraries (in particular, scipy, numpy and matplotlib) don’t work properly. I am sorely tempted to purge the complete /Library/Frameworks/Python.framework path, as well as the MacPorts Python installation. After that, I’ll start from a clean slate by installing a properly configured Python, e.g. that from Enthought. Am I running headlong into trouble? Or is this a sane undertaking? (In particular, I need a working Python in the next few days and if I end up with a non-working Python this would be a catastrophe of medium proportions. On the other hand, some features I need from matplotlib aren’t working now.)

    Read the article

  • Is there a library that can decompile a method into an Expression tree, with support for CLR 4.0?

    - by Daniel Earwicker
    Previous questions have asked if it is possible to turn compiled delegates into expression trees, for example: http://stackoverflow.com/questions/767733/converting-a-net-funct-to-a-net-expressionfunct The sane answers at the time were: It's possible, but very hard and there's no standard library solution. Use Reflector! But fortunately there are some greatly-insane/insanely-great people out there who like reverse engineering things, and they make difficult things easy for the rest of us. Clearly it is possible to decompile IL to C#, as Reflector does, and so you could in principle instead target CLR 4.0 expression trees with support for all statement types. This is interesting because it wouldn't matter if the compiler's built-in special support for Expression<> lambdas is never extended to support building statement expression trees in the compiler. A library solution could fill the gap. We would then have a high-level starting point for writing aspect-like manipulations of code without having to mess with raw IL. As noted in the answers to the above-linked question, there are some promising signs, but searching hasn't told me whether there's been much progress since then. So has anyone finished this job, or got very far with it? Note: CLR 4.0 is now released. Time for another look-see.

    Read the article

  • Alternative to Galileo GWS

    - by Anton Gogolev
    Please note that this Galileo is absolutely not related to Java. Galileo is basically a set of web services which can be used to book airline tickets. Originally, it was supposed to be used via Galileo Desktop, whereby operators would enter various commands to perform required operations. For example, SA*AZ610J20JULFCOJFK will "Display seat availability map for specified flight and class". Granted, humans can get used to it and be very efficient, but then comes the problem of integrating this with other systems. For that, folks at TravelPort basically slapped a SOAP interface onto this system (which must have been written in COBOL or something), without even thinking about actually embracing XML. For example, it can contain <Ind1>N</Ind1> <Ind2>N</Ind2> <Ind3>N</Ind3> ... <Ind72>N</Ind72> <!-- Yes! 72! --> or, better yet <Text>P/RU/4xxx24528/RU/11MAY67/M/23DEC12/AxxxxxxV/MxxxM</Text> In light of this, my question is as follows: are there any sane airline ticket booking systems we can integrate with? Or are there companies which have products that can abstract away all this?

    Read the article

  • Can a standalone ruby script (windows and mac) reload and restart itself?

    - by user30997
    I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote console into each machine to kill the worker, do a source control sync, and restart. I would like to be able to have the master place a message out on the network that tells each machine to sync and restart. That's where I hit a roadblock. If I were using any sane platform, I could just do: exec('ruby', __FILE__) ...and be done. However, I did the following test: p Process.pid sleep 1 exec('ruby', __FILE__) ...and on Windows, I get one ruby instance for each call to exec. None of them die until I hit ^C on the window in question. On every platform I tried this on, it is executing the new version of the file each time, which I have verified by making simple edits to the test script while the test marched along. The reason I'm printing the pid is to double-check the behavior I'm seeing. On Windows, I am getting a different pid with each execution - which I would expect, considering that I am seeing a new process in the task manager for each run. The Mac is behaving correctly: the pid is the same for every system call and I have verified with dtrace that each run is triggering a call to the execve syscall. So, in short, is there a way to get a Windows ruby script to restart its execution so it will be running any code - including itself - that has changed during its execution? Please note that this is not a rails application, though it does use activerecord.

    Read the article

  • MongoDB - proper use of collections?

    - by zmg
    In Mongo my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things) and had previously been using MySQL and pretty heavy partitioning in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database with several tables to further partition the data (blogs, pages, etc). My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way, user 'zach's blog entries would go into the 'zach' collection with associated comments and such becoming sub-objects in the same collection. Basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course since I haven't really used Mongo before I'm having trouble gauging the (ahem..) quality of this idea and the potential problems it might cause down the road. I'd like user data to be treated a lot like a users directory in a *nix environment, where user-created/non-shared data (mostly) gets put into one place (currently with MySQL that would be the appname_users database as mentioned above). Most of the user's data will be specific to the user's page(s). Some of the user data which is queried across all site users (searchable user profiles) is currently kept in a separate database/table and I expect things like this could be put into an appname_system database and be broken up into collections and/or application-specific databases (appname_profiles). Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side I'd really already been attempting to treat MySQL as a schema-less document-store and doing this with Mongo seems much more intuitive/sane/rational so I'm really looking forward to getting started. Thanks, Zach
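
    For comparison, one layout worth weighing against one-collection-per-user is a single collection with the owning user as an indexed field. A minimal sketch of that shape with a current pymongo follows; the database, collection and field names are placeholders of mine, not anything taken from the question.

        # Hypothetical single-collection layout; all names are placeholders.
        from datetime import datetime

        from pymongo import ASCENDING, MongoClient

        client = MongoClient()        # assumes a local mongod on the default port
        db = client.appname

        # One shared collection: the owner is just an indexed field, so the number
        # of collections stays constant no matter how many users sign up.
        db.blogs.create_index([('user', ASCENDING), ('created', ASCENDING)])

        db.blogs.insert_one({
            'user': 'zach',
            'title': 'First post',
            'body': '...',
            'created': datetime.utcnow(),
            'comments': [             # comments embedded as sub-documents
                {'user': 'anna', 'text': 'Nice post', 'created': datetime.utcnow()},
            ],
        })

        zachs_posts = list(db.blogs.find({'user': 'zach'}).sort('created', ASCENDING))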

    Read the article

  • GitHub solution for personal repo

    - by Luke Maurer
    So I've got my private SVN repo on my home server, and it has maybe 30 different modules thrown together in it, ranging from abortive throw-away larks to a few endeavors that might actually go somewhere someday. But a recent filesystem failure (BTW, never ever EVER use XFS without a battery-backed hardware RAID) has me spooked and thinking of using a DVCS for all that. I've also just had quite the swig of the Git koolaid, and I've been working with GitHub of late, so that's where I'm looking right now. Of course, it would be silly to shell out major cash for a separate private Git repo for every little project, and I don't want to have to be selective about what I throw up there (I love all my children :-D ), so I'll have to be somewhat creative about this. I can happily use SSH to my home box to use Git the way I've been using SVN, and I'm thinking from there I could amalgamate everything into, say, a big project with 30 submodules, which I then push to GitHub. What'd be a sane way to set this up? Does using submodules sound feasible? How do I sync it all to my private GitHub repo? Cron job? Git hook? I'd love to hear it if anyone's done something similar. I'm not really married to Git or GitHub, so a sufficiently compelling feature of another solution might sway me. But if your answer does involve a different system (especially a different VCS), be advised it'll be a tougher sell :-)

    Read the article

  • what libraries or platforms should I use to build web apps that provide real-time, asynchronous data

    - by Daniel Sterling
    This is less a question with a simple, practical answer and more a question to foster discussion on the real-time data exchange topic. I'll begin with an example: Google Wave is, at its core, a real-time asynchronous data synchronization engine. Wave supports (or plans to support) concurrent (real-time) document collaboration, disconnected (offline) document editing, conflict resolution, document history and playback with attribution, and server federation. A core part of Wave is the Operational Transformation engine: http://www.waveprotocol.org/whitepapers/operational-transform The OT engine manages document state. Changes between clients are merged and each client has a sane and consistent view of the document at all times; the final document is eventually consistent between all connected clients. My question is: is this system abstract or general enough to be used as a library or generic framework upon which to build web apps that synchronize real-time, asynchronous state in each client? Is the Wave protocol directly used by any current web applications (besides Google's client)? Would it make sense to directly use it for generic state synchronization in a web app? What other existing libraries or frameworks would you consider using when building such a web app? How much code in such an app might be domain-specific logic vs generic state synchronization logic? Or, put another way, how leaky might the state synchronization abstractions be? Comments and discussion welcomed!
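
    To make the OT part of the question a little more concrete, here is a toy illustration of the core transform step for the simplest possible case, two concurrent single-character insertions. A real engine such as Wave's also handles deletes, compound operations, attribution and server-side ordering, so this is only meant to show why both sites converge.

        # Toy operational-transformation example: two sites apply each other's
        # concurrent insert after transforming it, and end up with the same text.
        def transform_insert(pos, other_pos):
            """Shift an insert position to account for a concurrent insert at other_pos.
            (A real engine also needs a deterministic tie-break when positions are equal.)"""
            return pos + 1 if other_pos <= pos else pos

        def apply_insert(text, pos, ch):
            return text[:pos] + ch + text[pos:]

        doc = "abc"
        # Site A inserts 'X' at 1 while site B concurrently inserts 'Y' at 2.
        site_a = apply_insert(apply_insert(doc, 1, 'X'), transform_insert(2, 1), 'Y')
        site_b = apply_insert(apply_insert(doc, 2, 'Y'), transform_insert(1, 2), 'X')

        assert site_a == site_b == "aXbYc"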

    Read the article

  • Fastest way to modify a decimal-keyed table in MySQL?

    - by javanix
    I am dealing with a MySQL table here that is keyed in a somewhat unfortunate way. Instead of using an auto increment column as a key, it uses a column of decimals to preserve order (presumably so it's not too difficult to insert new rows while preserving a primary key and order). Before I go through and redo this table to something more sane, I need to figure out how to rekey it without breaking everything. What I would like to do is something that takes a list of doubles (the current keys) and outputs a list of integers (which can be cast down to doubles for rekeying). For example, input {1.00, 2.00, 2.50, 2.60, 3.00} would give output {1, 2, 3, 4, 5}. Since this is a database, I also need to be able to update the rows nicely: UPDATE table SET `key`='3.00' WHERE `key`='2.50'; Can anyone think of a speedy algorithm to do this? My current thought is to read all of the doubles into a vector, take the size of the vector, and output a new vector with values from 1 => doubleVector.size. This seems pretty slow, since you wouldn't want to read every value into the vector if, for instance, only the last n/100 elements needed to be modified. I think there is probably something I can do in place, since only values after the first non-integer double need to be modified, but I can't for the life of me figure anything out that would let me update in place as well. For instance, setting 2.60 to 3.00 the first time you see 2.50 in the original key list would result in an error, since the key value 3.00 is already used for the table.
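
    To make the read-everything approach concrete, here is a rough sketch; the table and column names, the MySQLdb driver, and the assumptions that every key is positive and unique, that the whole key column fits in memory, and that the DECIMAL column can hold max_key + row_count are all mine. It sorts the keys once, maps each to its 1-based rank, and rekeys in two passes through a temporary range so no UPDATE ever collides with a key value that is still in use.

        # Sketch only: placeholder names, assumes positive keys that fit in memory.
        import MySQLdb  # hypothetical driver choice

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
        cur = conn.cursor()

        cur.execute("SELECT `key` FROM `mytable` ORDER BY `key`")
        old_keys = [row[0] for row in cur.fetchall()]          # e.g. [1.00, 2.00, 2.50, 2.60, 3.00]
        rank = {old: i + 1 for i, old in enumerate(old_keys)}  # target integer for each key

        offset = int(max(old_keys)) + 1  # temporary range no current key occupies

        # Pass 1: move every row into the collision-free temporary range.
        for old in old_keys:
            cur.execute("UPDATE `mytable` SET `key` = %s WHERE `key` = %s",
                        (rank[old] + offset, old))

        # Pass 2: shift the temporary values down to the final 1..N integers.
        for old in old_keys:
            cur.execute("UPDATE `mytable` SET `key` = %s WHERE `key` = %s",
                        (rank[old], rank[old] + offset))

        conn.commit()

    The in-place variant the question hints at would only need to touch keys from the first non-integer value onwards, but it still needs the same kind of temporary range (or a scratch column) to dodge duplicate-key errors like the 2.50 -> 3.00 case.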

    Read the article

  • Put together tiles in android sdk and use as background

    - by Jon
    In a feeble attempt to learn some Android development, I am stuck at graphics. My aim here is pretty simple: Take n small images and build a random image, larger than the screen, with the possibility to scroll around. Have an animated object move around on it. I have looked at the SDK examples, Lunar Lander especially, but there are a few things I utterly fail to wrap my head around. I've got a bird's-eye view plan (which in my head seems reasonably sane): How do I merge the tiles into one large image? The background is static so I figure I should do it like this: Make a 2d array with refs to the tiles; make a large Drawable and draw the tiles on it; at init draw this big image as the background; at each onDraw redraw the background of the previous spot of the moving object, and the moving object at its new location. The problem is the hands-on part. I load the small images with "Bitmap img1 = BitmapFactory.decodeResource (res, R.drawable.img1)", but then what? Should I make a canvas and draw the images on it with "canvas.drawBitmap (img1, x, y, null);"? If so how to get a Drawable/Bitmap from that? I'm totally lost here, and would really appreciate some hands-on help (I would of course be grateful for general hints as well, but I'm primarily trying to understand the Graphics objects). To make you, dear reader, see my level of confusion, I will add my last desperate try: Drawable drawable; Canvas canvas = new Canvas (); Bitmap img1 = BitmapFactory.decodeResource (res, R.drawable.img1); // 50 x 100 px image Bitmap img2 = BitmapFactory.decodeResource (res, R.drawable.img2); // 50 x 100 px image canvas.drawBitmap (img1, 0, 0, null); canvas.drawBitmap (img2, 50, 0, null); drawable.draw (canvas); // obviously wrong as draw == null this.setBackground (drawable); Thanks in advance

    Read the article

  • Fastest way to convert a list of doubles to a unique list of integers?

    - by javanix
    I am dealing with a MySQL table here that is keyed in a somewhat unfortunate way. Instead of using an auto increment column as a key, it uses a column of decimals to preserve order (presumably so it's not too difficult to insert new rows while preserving a primary key and order). Before I go through and redo this table to something more sane, I need to figure out how to rekey it without breaking everything. What I would like to do is something that takes a list of doubles (the current keys) and outputs a list of integers (which can be cast down to doubles for rekeying). For example, input {1.00, 2.00, 2.50, 2.60, 3.00} would give output {1, 2, 3, 4, 5}. Since this is a database, I also need to be able to update the rows nicely: UPDATE table SET `key`='3.00' WHERE `key`='2.50'; Can anyone think of a speedy algorithm to do this? My current thought is to read all of the doubles into a vector, take the size of the vector, and output a new vector with values from 1 => doubleVector.size. This seems pretty slow, since you wouldn't want to read every value into the vector if, for instance, only the last n/100 elements needed to be modified. I think there is probably something I can do in place, since only values after the first non-integer double need to be modified, but I can't for the life of me figure anything out that would let me update in place as well. For instance, setting 2.60 to 3.00 the first time you see 2.50 in the original key list would result in an error, since the key value 3.00 is already used for the table.

    Read the article

  • Simplest PHP Routing framework .. ?

    - by David
    I'm looking for the simplest implementation of a routing framework in PHP, in a typical PHP environment (running on Apache, or maybe nginx). It's the implementation itself I'm mostly interested in, and how you'd accomplish it. I'm thinking it should handle URLs, with the minimal rewriting possible (is it really a good idea to have the same entrypoint for all dynamic requests?!), and it should not mess with the querystring, so I should still be able to fetch GET params with $_GET['var'] as you'd usually do. So far I have only come across .htaccess solutions that put everything through an index.php, which is sort of okay. Not sure if there are other ways of doing it. How would you "attach" what URLs map to what controllers, and the relation between them? I've seen different styles. One huge array, with regular expressions and other stuff to contain the mapping. The one I think I like the best is where each controller declares what map it has, and thereby, you won't have one huge "global" map, but a lot of small ones, each neatly separated. So you'd have something like: class Root { public $map = array( 'startpage' => 'ControllerStartPage' ); } class ControllerStartPage { public $map = array( 'welcome' => 'WelcomeControllerPage' ); } // Etc ... Where: 'http://myapp/' // maps to the Root class 'http://myapp/startpage' // maps to the ControllerStartPage class 'http://myapp/startpage/welcome' // maps to the WelcomeControllerPage class 'http://myapp/startpage/?hello=world' // Should of course have $_GET['hello'] == 'world' What do you think? Do you use anything yourself, or have any ideas? I'm not interested in huge frameworks already solving this problem, but the smallest possible implementation you could think of. I'm having a hard time coming up with a solution satisfying enough, for my own taste. There must be something pleasing out there that handles a sane bootstrapping process of a PHP application without trying to pull a big magic hat over your head, and force you to use "their way", or the highway! ;)

    Read the article

  • Running bcdedit from Python in Windows 2008 SP2

    - by Lee-Man
    I do not know Windows well, so that may explain my dilemma ... I am trying to run bcdedit in Windows 2008R2 from Python 2.6. My Python routine to run a command looks like this: def run_program(cmd_str): """Run the specified command, returning its output as an array of lines""" dprint("run_program(%s): entering" % cmd_str) cmd_args = cmd_str.split() subproc = subprocess.Popen(cmd_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) (outf, errf) = (subproc.stdout, subproc.stderr) olines = outf.readlines() elines = errf.readlines() if Options.debug: if elines: dprint('Error output:') for line in elines: dprint(line.rstrip()) if olines: dprint('Normal output:') for line in olines: dprint(line.rstrip()) errf.close() outf.close() res = subproc.wait() dprint('wait result=', res) return (res, olines) I call this function thusly: (res, o) = run_program('bcdedit /set {current} MSI forcedisable') This command works when I type it from a cmd window, and it works when I put it in a batch file and run it from a command window (as Administrator, of course). But when I run it from Python (as Administrator), Python claims it can't find the command, returning: bcdedit is not recognized as an internal or external command, operable program or batch file Also, if I try running my batch file from Python (which works from the command line), it also fails. I've also tried it with the full path to bcdedit, with the same results. What is it about calling bcdedit from Python that makes it not found? Note that I can call other EXE files from Python, so I have some level of confidence that my Python code is sane ... but who knows. Any help would be most appreciated.
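
    For what it's worth, one frequent cause of exactly this symptom is WOW64 file-system redirection: a 32-bit Python on 64-bit Windows sees SysWOW64 (which does not ship bcdedit.exe) wherever it asks for System32, so a command that works in a 64-bit cmd window simply is not there for the child process. Assuming that is what is happening here, a sketch of the usual workaround is to call the binary through the Sysnative alias (or run a 64-bit Python):

        # Sketch: resolve a bcdedit.exe path that survives WOW64 redirection.
        import os
        import platform
        import subprocess

        def bcdedit_path():
            """Prefer the Sysnative alias when running as a 32-bit process on
            64-bit Windows, where System32 is silently redirected to SysWOW64."""
            windir = os.environ.get('SystemRoot', r'C:\Windows')
            if platform.architecture()[0] == '32bit':
                sysnative = os.path.join(windir, 'Sysnative', 'bcdedit.exe')
                if os.path.exists(sysnative):   # only visible to 32-bit processes
                    return sysnative
            return os.path.join(windir, 'System32', 'bcdedit.exe')

        proc = subprocess.Popen([bcdedit_path(), '/set', '{current}', 'MSI', 'forcedisable'],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        print(out)
        print(err)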

    Read the article

  • Non-english domain naming issues in programming

    - by Svend
    Most programming code, I imagine, is written in English. But I'm curious how people handle the issue of naming here. A lot of programming is done within some business domain, usually with well-established terms for certain procedures, items. I'm from Denmark for instance, and something I work a lot with has a term called "indblikskode", which sorta translates to "insight code". So, do I use the line "string indblikskode = ..." in the C# code for some webservice related to this? Or do I try to use a translation, such as "insightcode"? The business I'm in isn't even consistent in its language, for instance using the term "organisatorisk enhed" (organizational unit), but just as often using the abbreviation "OU", which is obviously abbreviated from the English. How do other people handle this naming issue, while keeping consistent, and sane (in everything from simple variable names in your code, to database tables, to server names)? Duplicates: Should identifiers and comments be always in English or in the native language of the application and developers? Do you use another language instead of english ?

    Read the article

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LiNQ support.) So here's the problem: With peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID) then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value. So here's the debate: (1) abandon Entity Framework and use another ORM: either use NHibernate and give up LiNQ support, or use linq2sql and give up future support (not to mention get bound to SQL Server on the DB); (2) abandon GUIDs and go with another PK strategy; (3) devise a method to generate sequential GUIDs (COMBs?) at the application layer. I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
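
    As a concrete illustration of option (3), here is a rough sketch of the COMB idea in Python (the concept carries over directly to C#): overwrite the trailing bytes of a random GUID with a timestamp, so keys generated at the application layer arrive in roughly increasing order and fragment the index less. The bytes SQL Server compares first when ordering uniqueidentifier values differ from a simple left-to-right comparison, so treat this as the concept rather than a drop-in implementation.

        # Sketch of a COMB-style (timestamp-tailed) GUID generator.
        import time
        import uuid

        def comb_guid():
            """Random GUID with its last six bytes replaced by a timestamp, so
            application-generated keys sort roughly by creation time."""
            random_part = uuid.uuid4().bytes[:10]
            timestamp = int(time.time() * 300).to_bytes(6, 'big')  # ~3.3 ms ticks
            return uuid.UUID(bytes=random_part + timestamp)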

    Read the article

  • What database systems should an startup company consider?

    - by Am
    Right now I'm developing the prototype of a web application that aggregates a large number of text entries from a large number of users. This data must be frequently displayed back and often updated. At the moment I store the content inside a MySQL database and use an NHibernate ORM layer to interact with the DB. I've got tables defined for users, roles, submissions, tags, notifications, etc. I like this solution because it works well and my code looks nice and sane, but I'm also worried about how MySQL will perform once the size of our database reaches a significant size. I feel that it may struggle performing join operations fast enough. This has made me think about non-relational database systems such as MongoDB, CouchDB, Cassandra or Hadoop. Unfortunately I have no experience with any of them. I've read some good reviews on MongoDB and it looks interesting. I'm happy to spend the time and learn if one turns out to be the way to go. I'd much appreciate anyone offering points or issues to consider when going with a non-relational DBMS.

    Read the article

  • Possible to capture all events in a web browser?

    - by David
    I am working on a pet project and am at the research stage. Quick summary: I am trying to intercept all form submits, onclick, and every single keydown. My library of choice is either jquery, or jquery + prototypejs. I figure I can batch up the events into a queue/stack and send them back to the server in time interval batches to keep performance relatively stable. Concerns: Form submits and changes would be relatively simple. Something like $("form :inputs").bind("change", function() { ... record event... }); But how to ensure I get precedence over the application's handlers, as I have a habit of putting return false on a lot of my form handlers when there is a validation issue. As I understand it, that effectively stops the event in its tracks. My project: For my smaller remote clients I will put their products onto a VPS or run it in my home data center. Currently I use a basic authentication system: given a username/password they see the website and then hopefully send me somewhat sane notes on what is broken or should be tweaked. As a better solution, I've written a simple proxy web server that does the above but allows me to have one DNS entry and then depending on credentials it makes an internal request, relaying headers and re-writing URLs as needed. Every single html/text or application/* request is compressed and shoved into a sqlite table so I can partially replay what they've done. Now I am shifting to the frontend and would like to capture clicks, keydowns, and submits on everything on the page.

    Read the article

  • Tips for Using Multiple Development Systems

    - by Tim Lytle
    When I travel, I don't pack up the desktop I use in the office and take it with me. Maybe I should, but I don't. However, since I'm a contract programmer I like to be able to work wherever I am: I'm mostly thinking of web development here. Version Control goes a long way in keeping sane and working on multiple projects on multiple systems (two or three computers); however, there are the issues of: IDE settings - different display sizes mean the IDE settings can't be completely synced, if at all. Database - if the database is 'external' (even if it's running on the same system, it's not in version control), how do you maintain the needed syncs of structure? Development Stack - Some projects need non-standard extensions, libraries, etc. installed. Just an overview of some of the hassle involved with developing on multiple systems. I'll probably end up asking some specific questions, but I thought CW-style tips might reveal some things I wouldn't even think to ask about. Update: I guess this would also address tips to make upgrading/replacing your development system easier (something I've just done). So, one tip per answer please, so the 'top' tips are easy to find. How do you make it easier to develop on multiple systems, or to transfer work after upgrading/replacing a development system?

    Read the article

  • What is wrong with this Fortran '77 snippet?

    - by notJim
    I've been tasked with maintaining some legacy Fortran code, and I'm having trouble getting it to compile with gfortran. I've written a fair amount of Fortran 95, but this is my first experience with Fortran 77. This snippet of code is the problematic one: CHARACTER*22 IFILE, OFILE IFILE='TEST.IN' OFILE='TEST.OUT' OPEN(5,FILE=IFILE,STATUS='NEW') OPEN(6,FILE=OFILE,STATUS='NEW') common/pabcde/nfghi When I compile with gfortran file.FOR, all lines starting with the common statement are errors (e.g. Error: Unexpected COMMON statement at (1) for each following line until it hits the 25 error limit). I compiled with -Wall -pedantic, but fixing the warnings did not fix this problem. The crazy thing is that if I comment out all 4 lines starting with IFILE='TEST.IN', the program compiles and works as expected, but I must comment out all of them. Leaving any of them uncommented gives me the same errors starting with the common statement. If I comment out the common statement, I get the same errors, just starting on the following line. I am on OS X Leopard (not Snow Leopard) using gfortran. I've used this very system with gfortran extensively to write Fortran 95 programs, so in theory the compiler itself is sane. What the hell is going on with this code?

    Read the article

  • Website. VoteUp or VoteDown Videos. How to restrict users voting multiple times?

    - by DJDonaL3000
    I'm working on a website (html, css, javascript, ajax, php, mysql), and I want to restrict the number of times a particular user votes for a particular video. It's similar to the YouTube system where you can voteUp or voteDown a particular video. Each vote involves adding a row to the video.votes table, which logs the time, vote direction (up or down), the client IP address (using PHP: $ip = $_SERVER['REMOTE_ADDR'];), and of course the ID of the video in question. Adding votes is as simple as (pseudocode): Javascript:onClick( vote( a,b,c,d ) ), which passes variables to a PHP insertion script via ajax, and finally we replace the voting buttons with a "Thank You For Voting" message. THE PROBLEM: If you reload/refresh the page after voting, you can vote again, and again, and again, you get the point. MY QUESTION: How do you limit the number of times a particular user votes for a particular video? MY THOUGHTS: Do you use cookies, adding a new cookie with the ID of the video and checking for it before you insert a new vote? OR Before you insert the vote, do you use the IP address and the videoID to see if this same user(IP) has voted for this same video(vidID) in the past 24hrs(mktime), and either allow or disallow the vote insertion based on this query? OR Do you just not care? Take the assumption that most users are sane, and have better things to do than refresh pages and vote repeatedly? Any suggestions or ideas welcome.

    Read the article
