Search Results

Search found 3168 results on 127 pages for 'grand central dispatch'.


  • Database choices

    - by flobadob
    I have a prickly design issue regarding the choice of database technologies for a group of new applications. The final suite of applications would have the following database requirements: central databases (more than one) using MySQL (it must be MySQL because the host is justhost.com), and an application to be written which accesses the multiple MySQL databases on the web host. This application will also write to a local serverless database (SQLite/Firebird/VistaDB/whatever). Different flavors of this application will be created for Windows (.NET), Windows Mobile, Android if possible, and iPhone if possible. So the design task is to minimise the quantity of code needed to achieve this. This is going to be tricky, since the languages used are already C# / Java (Android) and Objective-C (iPhone). Not too worried about that, but can the work required to implement the various database access layers be minimised? The serverless database will hold similar data to the MySQL server, so some kind of inheritance in the DAL would be useful. Looking at Hibernate/NHibernate, and there is LINQ to whatever. So many choices!
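
    One way to keep the per-platform work down is to put the shared queries in a single abstract data-access class and let each back end supply only its connection. A minimal sketch of that idea, in Java since Android is one of the targets (table, class and connection details are invented for illustration; the same shape carries over to C# with ADO.NET):

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.util.ArrayList;
      import java.util.List;

      // Hypothetical sketch: one abstract DAL holds the shared queries, and each
      // back end only supplies its JDBC connection.
      abstract class RecordDao {
          protected abstract Connection open() throws SQLException;

          // Shared logic lives once, in the base class.
          public List<String> loadNames() throws SQLException {
              List<String> names = new ArrayList<>();
              try (Connection c = open();
                   PreparedStatement ps = c.prepareStatement("SELECT name FROM records");
                   ResultSet rs = ps.executeQuery()) {
                  while (rs.next()) {
                      names.add(rs.getString("name"));
                  }
              }
              return names;
          }
      }

      class MySqlRecordDao extends RecordDao {
          @Override
          protected Connection open() throws SQLException {
              // Central MySQL database on the web host (URL and credentials are placeholders).
              return DriverManager.getConnection("jdbc:mysql://example.com/appdb", "user", "pass");
          }
      }

      class SqliteRecordDao extends RecordDao {
          @Override
          protected Connection open() throws SQLException {
              // Local serverless store; requires an SQLite JDBC driver on the classpath.
              return DriverManager.getConnection("jdbc:sqlite:local.db");
          }
      }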

    Read the article

  • Generating different random values, valid for a day, on independent devices?

    - by Pentium10
    Let me describe the system. There are several mobile devices, each independent of the others, and they generate content for the same record id. I want to avoid generating the same content for the same record on different devices, so I thought I would use a random value to cluster the content pool. Suppose you have choices from 1 to 100.
    Day 1
    Device#1 will choose for record#33 between 1-10
    Device#2 will choose for record#33 between 40-50
    Device#3 will choose for record#33 between 50-60
    Device#1 will choose for record#55 between 40-50
    Device#2 will choose for record#55 between 1-10
    Device#3 will choose for record#55 between 10-20
    Device#1 will choose for record#11 between 1-10
    Device#2 will choose for record#22 between 1-10
    Device#3 will choose for record#99 between 1-10
    Day 2
    Device#1 will choose for record#33 between 90-100
    Device#2 will choose for record#33 between 1-10
    Device#3 will choose for record#33 between 50-60
    They don't have access to a central server. Data available to each of them: IMEI (unique per mobile), today's date (same on all devices), record id (same on all devices). What do you think, how is this possible? ps. tags can be edited
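
    One common way to do this without a server is to hash the data every device already shares and derive the day's range from the digest. A minimal sketch, assuming SHA-256 and ten buckets of width 10 (names are illustrative only); note that without coordination two devices can still land in the same bucket by chance, roughly 10% of the time:

      import java.nio.charset.StandardCharsets;
      import java.security.MessageDigest;
      import java.security.NoSuchAlgorithmException;

      // Hypothetical sketch: derive a deterministic range from data every device
      // already has (its IMEI, today's date and the record id), with no server.
      public class RangePicker {
          static int rangeStart(String imei, String isoDate, long recordId) throws NoSuchAlgorithmException {
              String seed = imei + "|" + isoDate + "|" + recordId;
              byte[] digest = MessageDigest.getInstance("SHA-256")
                      .digest(seed.getBytes(StandardCharsets.UTF_8));
              // Fold the first two bytes into a non-negative int and map it to one
              // of the ten buckets 1-10, 11-20, ..., 91-100.
              int n = ((digest[0] & 0xFF) << 8) | (digest[1] & 0xFF);
              return (n % 10) * 10 + 1;
          }
      }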

    Read the article

  • Should I base my Embedded Linux product on Qt?

    - by Udi
    My company is developing a medical product. One of the components is a PDA-like platform that will run embedded Linux. We were considering Qt as the UI framework but found out that Qt is a lot more than that (we are not familiar with Qt). In general, the device needs to do the following:
    1. Receive measurements over USB HID from another device (USB HID is used for convenience).
    2. Process the measurements.
    3. Store them in a database.
    4. Interact with the user using the device's touch-screen LCD.
    5. Communicate (Wi-Fi, TCP/IP) with a central management station that collects the data and configures the device.
    6. Include a web server to allow accessing the device via a browser.
    We intend to program in C++. My questions are:
    1. Is that a good choice for such a device?
    2. Assuming we choose Qt, how do we build our product? Do we use Qt just as a GUI framework and write the application code in a separate process (passing messages between Qt and the application process)? Do we write the entire application inside Qt, using all of the services the tool has to offer? Another approach?

    Read the article

  • How to Implement Rich Document Editor for iPhone

    - by benjismith
    I'm just getting started on a new iPhone/iPad development project, and I need to display a document with rich styled text (potentially with embedded images). The user will touch the document, dragging to highlight individual words or multiline text spans. When the text is highlighted, a context menu will appear, letting them change the color of highlighting or add margin notes (or other various bits of structured metadata). If you're familiar with adding comments to a Word document (or annotating a PDF), then this is the same sort of thing. But in my case, the typical user will spend many many hours within the app, adding thousands (in some cases, tens of thousands) of small annotations to the central document. All of those bits of metadata will be stored locally awaiting synchronization with a remote web service. I've read other pieces of advice, where developers suggest creating a UIWebView control and passing it an HTML string. But that seems kind of clunky, especially with all the context-sensitivity that I want to include. Anyhow, I'm brand new to iPhone development and Objective-C, though I have ten years of software development experience, using a variety of languages on many different platforms, so I'm not worried about getting my hands dirty writing new functionality from scratch. But if anyone has experience building a similar kind of component, I'm interested in hearing strategies for enabling that kind of rich document markup and annotation.

    Read the article

  • PHP string handling tricks

    - by Dam
    Hi, my question: I need to get the 10 words before and the 10 words after a given keyword, i.e. the extract should start 10 words before the keyword and end 10 words after it. Given keyword: "Twenty-three". The main trick: the content contains some HTML tags, and those tags need to stay attached to their content, so the output should be the words from 10 before to 10 after with the tags preserved. The content is below:
    <div id="hpFeatureBoxInt"><h3><a href="/go/homepage/i/int/news/world/1/-/news/1/hi/world/europe/8592190.stm">Suicide bombings hit Moscow Metro</a></h3><p>Past suicide bombings in Moscow have been blamed on Islamist rebels At least 35 people have been killed after two female suicide bombers blew themselves up on Moscow Metro trains in the morning rush hour,<h2><span class="dy">Top News Story</span></h2> officials say.<img height="150" width="201" alt="Emergency services carry a body from a Metro station in Moscow (29 March 2010)" src="http://wwwimg.bbc.co.uk/feedengine/homepage/images/_47550689_moscowap203_201x150.jpg">Twenty-three died in the first blast at 0756 (0356 GMT) as a<a href="#"> train stood </a>at the central Lubyanka station, beneath the offices of the FSB intelligence agency.About 40 minutes later, a second explosion ripped through a train at Park Kultury, leaving another 12 dead.No-one has said they carried out the worst attack in the capital since 2004. </p><p id="fbilisten"><a href="/go/homepage/i/int/news/heading/-/news/">More from BBC News</a></p></div>
    Thank you
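
    A rough sketch of the windowing step (the question is PHP, but the idea is language-neutral and shown here in Java): split on whitespace so each HTML tag stays glued to the word it touches, find the token containing the keyword, and keep ten tokens on each side. A real version would also have to re-balance any tag pair that the window cuts in half:

      import java.util.Arrays;
      import java.util.List;

      // Illustrative sketch only; class and method names are invented.
      public class KeywordWindow {
          static String around(String content, String keyword) {
              List<String> words = Arrays.asList(content.split("\\s+"));
              int hit = -1;
              for (int i = 0; i < words.size(); i++) {
                  if (words.get(i).toLowerCase().contains(keyword.toLowerCase())) {
                      hit = i;
                      break;
                  }
              }
              if (hit < 0) {
                  return "";  // keyword not found
              }
              int from = Math.max(0, hit - 10);          // 10 tokens before
              int to = Math.min(words.size(), hit + 11); // keyword + 10 tokens after
              return String.join(" ", words.subList(from, to));
          }
      }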

    Read the article

  • SharePoint Active Directory forms authentication

    - by Sushant
    Hi, I am developing a SharePoint website in forms authentication mode. I am trying to authenticate myself / my company's users against the company's Active Directory. The LDAP path I received from my technical team is LDAP://infinmumcfac.inf.com OU=Infotech,DC=inf,DC=com. I got this piece of code from the Microsoft site: <membership defaultProvider="LdapMembershipProvider"> <providers> <add name="LdapMembership" type="Microsoft.Office.Server.Security.LDAPMembershipProvider, Microsoft.Office.Server, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71E9BCE111E9429C" server="DC" port="389" useSSL="false" userDNAttribute="distinguishedName" userNameAttribute="sAMAccountName" userContainer="CN=Users,DC=userName,DC=local" userObjectClass="person" userFilter="(|(ObjectCategory=group)(ObjectClass=person))" scope="Subtree" otherRequiredUserAttributes="sn,givenname,cn" /> </providers> </membership> The site asked me to change the server and userContainer attributes, so I modified the code to: <membership defaultProvider="LdapMembershipProvider"> <providers> <add name="LdapMembership" type="Microsoft.Office.Server.Security.LDAPMembershipProvider, Microsoft.Office.Server, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71E9BCE111E9429C" server="infinmumcfac.inf.com" port="389" useSSL="false" userDNAttribute="distinguishedName" userNameAttribute="sAMAccountName" userContainer="OU=Infotech,DC=inf,DC=com" userObjectClass="person" userFilter="(|(ObjectCategory=group)(ObjectClass=person))" scope="Subtree" otherRequiredUserAttributes="sn,givenname,cn" /> </providers> </membership> I placed this code in the web.config file of the central administration site and of my SharePoint website. I am still facing login issues. Any help or insight would be greatly appreciated.

    Read the article

  • Syncing Data to Remote Services, Best Practices for Caching?

    - by viatropos
    I want to be able to publish events to Eventbrite, Eventful, and Google Calendar for my Google Apps. Each service has slightly different properties for events... I will be syncing many other things too, such as users with Google Contacts and MailChimp, documents with Google Docs and some other services, etc. So I'm wondering, what is the recommended way of retrieving the data for the end user so that it's reasonably maintainable and optimized? Here are the approaches I'm having trouble choosing between:
    1. My app keeps a central database of all the models (Event, Document, User, Form, etc.), and whenever Admin creates an object (e.g. creates it through Eventbrite or through our admin panel), we sync it and store a copy in our local database. When User goes to the site /events, the app retrieves the events from the local database.
    2. Read events from a target feed, such as the Eventbrite or Eventful feed, and scrap the local database.
    Basically, I'm wondering: if we're storing all of the data on a remote service, do we really need to have a local database copy of the data? When would we need a local database, and when wouldn't we?

    Read the article

  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros vs. inline functions. I found the following discussion: Pros and Cons of Different macro function / inline methods in C ...but it didn't answer my primary burning question. Namely, what is the overhead in C of using a macro function (with variables, possibly other function calls) vs. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. The code snippet I'm modularizing is:
    double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
    double RepulsiveTerm = AttractiveTerm * AttractiveTerm;
    EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);
    My reason for turning it into an inline function/macro is so I can drop it into a C file and then conditionally compile other similar, but slightly different, functions/macros, e.g.:
    double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
    double RepulsiveTerm = pow(SigmaSquared/RadialDistanceSquared,9);
    EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);
    (note the difference in the second line...) This function is a central one in my code and gets called thousands of times per step in my program, and my program performs millions of steps. Thus I want to have the LEAST overhead possible, hence why I'm wasting time worrying about the overhead of inlining vs. transforming the code into a macro. Based on the prior discussion I already realize the other pros/cons of macros (type independence and the errors that result from it)... but what I want to know most, and don't currently know, is the PERFORMANCE. I know some of you C veterans will have some great insight for me!!

    Read the article

  • Log in to subdomain via main domain

    - by Mattias
    I have a website, available through multiple domain names, like www.domain1.com ... www.domain5.com. All my customers have their own subdomain, like: customer1.domain1.com, customer2.domain1.com, ..., customer351.domain4.com. Currently I don't use SSL; each customer logs in to their own account via their subdomain. I want to change this and have all customers log in on a central login page that would use SSL, for example https://login.domain1.com, and then somehow redirect each user to the correct subdomain address (the subdomains don't use SSL). How do I do this and maintain security? One idea I had: on login, add a random value somewhere in the database, redirect to the subdomain with that value in the query string, and after that the session takes care of it; each value can be used once only. But how secure is that? I guess someone would ask me "why?" Because SSL costs money, and unfortunately I don't have a lot of it. :D Thanks for your time!
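
    A minimal sketch of that one-time token idea follows (names and the 60-second expiry are assumptions; a shared database table would play the role of the in-memory map in production). The remaining weakness is that the token and the subsequent session still travel over plain HTTP once, so it narrows rather than removes the exposure:

      import java.security.SecureRandom;
      import java.util.Base64;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Hypothetical sketch: the SSL login page issues a random single-use token
      // tied to the user and a short expiry, redirects to the customer's subdomain,
      // and the subdomain exchanges the token for a session exactly once.
      public class LoginTokens {
          private static final SecureRandom RNG = new SecureRandom();
          // token -> "userId|expiryMillis"
          private static final Map<String, String> pending = new ConcurrentHashMap<>();

          static String issue(String userId) {
              byte[] raw = new byte[32];
              RNG.nextBytes(raw);
              String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
              pending.put(token, userId + "|" + (System.currentTimeMillis() + 60_000));
              return token;  // appended as ?token=... on the redirect to customerN.domain1.com
          }

          static String redeem(String token) {
              String entry = pending.remove(token);   // single use: removed on first look-up
              if (entry == null) {
                  return null;                        // unknown or already used
              }
              String[] parts = entry.split("\\|");
              if (System.currentTimeMillis() > Long.parseLong(parts[1])) {
                  return null;                        // expired
              }
              return parts[0];                        // user id to start the subdomain session for
          }
      }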

    Read the article

  • How to get a debug flow of execution in C++

    - by Rich
    Hi, I work on a global trading system which supports many users. Each user can book, amend, edit and delete trades. The system is regulated by a central deal capture service, which informs all the users of any updates that occur. The problem comes when we have crashes: as the production environment is impossible to re-create on a test system, I have to rely on crash dumps and log files. However, these don't tell me what the user has been doing. I'd like a system that would (at the time of crashing) dump out a history of what the user has been doing. Anything that I add has to go into the live environment, so it can't impact performance too much. Ideas-wise, I was thinking of a MACRO at the top of each function which acted like a stack trace (only I could supply additional user information, like trade IDs, user dialog choices, etc.). The system would record stack traces (on a per-thread basis) and keep a history in a cyclic buffer (varying in size, depending on how much history you wanted to capture). Then on crash, I could dump this history stack. I'd really like to hear if anyone has a better solution, or if anyone knows of an existing framework? Thanks, Rich

    Read the article

  • Publishing content to multiple (unknown) domains using Open Graph?

    - by Beau Lebens
    I'm working on an application that publishes content ('articles') on a variety of URLs, which are all controlled by the same WordPress installation (mapped domains, all powered by the same set of code, part of a network). All of the publishing is done through one central Facebook App. I have no idea what the domains for these URLs are going to be, since they are controlled by our users, who register domains and then configure them within their account on our service. When I attempt to use Open Graph to publish content on one of these sites (one that has a customized domain), it is rejected with the following error (error code 1611028): Object at URL * * * * of type 'article' is invalid because the domain '* * * ' is not allowed for the specified application id ' * * *'. You can verify your configured 'App Domain' at.... Since I can't enter all of the domains into Facebook, and since they are not derived from my App URL anyway, is there any way that I can make this work? Some sort of magic OG tag I can put in the pages, or something? Or is it just not possible to do what I'm trying to do?

    Read the article

  • Organizing development teams

    - by Patrick
    A long time ago, when my company was much smaller, dividing the development work over teams was quite easy: the 'application' team developed the application-specific logic (often requiring a deep insight into specific industry problems), and the 'generic' team developed the parts that were common/generic to all applications (user-interface-related stuff, database access, low-level Windows stuff, ...). Over the years the boundaries between the teams have become fuzzy: the 'application' teams often write application-specific functionality with a 'generic' part, so instead of asking the 'generic' team to write that part for them, they write it themselves to speed up development, then donate it to the 'generic' team. The 'generic' team's focus seems to be more 'maintenance oriented': all of the 'very generic' code has already been written, so no new development is needed on it, but instead they continuously have to support all the functionality donated by the application teams. All this seems to indicate that it's no longer a good idea to have this split in teams. Maybe the 'generic' team should evolve into a 'software quality' team (defining and guarding the rules for writing good-quality software), or into a 'software deployment' team (defining how software should be deployed, installed, ...). How do you split up the work in different teams if you have different applications?
    - everybody can write generic code and donates it to a central 'generic' team?
    - everybody can write generic code, but nobody 'manages' this generic code (everybody is the owner)?
    - generic code is written by a 'generic' team only, and the applications have to wait until the 'generic' team delivers the generic part (via a library, via a DLL)?
    - there is no overlap in code between the different applications?
    - some other way?
    Notice that the advantage of having the mix (allowing everybody to write everywhere in the code) is that code is written in a more flexible way, and it's easier to debug since you can easily step into the 'generic' code in the debugger. But the big (and maybe only) disadvantage is that this generic code may become nobody's responsibility if there is no clear team that manages it anymore. What is your vision?

    Read the article

  • And now for a complete change of direction from C++ function pointers

    - by David
    I am building a part of a simulator. We are building off of a legacy simulator, but going in a different direction, incorporating live bits alongside the simulated bits. The piece I am working on effectively has to route commands from the central controller to the various bits. In the legacy code, there is a const array populated with an enumerated type. A command comes in, it is looked up in the table, then shipped off to a switch statement keyed by the enumerated type. The type enumeration has a choice VALID_BUT_NOT_SIMULATED, which is effectively a no-op from the point of view of the sim. I need to turn those no-ops into commands to actual other things [new simulated bits | live bits]. The new stuff and the live stuff have different interfaces than the old stuff [which makes me laugh about the shill job that it took to make it all happen, but that is a topic for a different discussion]. I like the array because it is a very apt description of the live thing this chunk is simulating [latching circuits by row and column]. I thought that I would try to replace the enumerated types in the array with pointers to functions and call them directly. This would be in lieu of the lookup + switch.

    Read the article

  • Coldfusion 8 and HTTP PUT - is there a way to PUT an object?

    - by ciaranarcher
    Hi all. We are using EHCache with CF 8 to cache stuff on a central server using a RESTful interface over HTTP. I am trying to cache a cfquery object on the cache server. I can get this to work if I call EHCache directly (i.e. store it in a local cache), but if I try to cache on a remote server over HTTP I run into problems. The code I am using is as follows:
    <cfhttp url="http://localhost:8080/myCache/myKey" method="put" result="r" timeout="2" throwonerror="true" >
        <cfhttpparam type="body" value="#ARGUMENTS.item#" />
    </cfhttp>
    CF doesn't like this reference to #ARGUMENTS.item# and complains "Complex object types cannot be converted to simple values." Can anyone give me an example of how to PUT an object over HTTP using CF? If this is not possible with CF then a Java example would be the next best thing. Many thanks in advance! PS: I do not want to use serialization to text/JSON etc., as this approach has issues with data integrity and, most importantly, it's not fast enough.
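
    For the "Java example" route, a minimal sketch of PUTting a serialized object over HTTP is below. It assumes the cache endpoint accepts a binary Java-serialized body and that the object being cached is Serializable; the URL, content type and error handling are placeholders to adapt to the actual cache server:

      import java.io.ObjectOutputStream;
      import java.io.OutputStream;
      import java.io.Serializable;
      import java.net.HttpURLConnection;
      import java.net.URL;

      public class CachePutExample {
          // PUTs a Java-serialized object to the given cache URL, e.g.
          // putObject("http://localhost:8080/myCache/myKey", value);
          static void putObject(String url, Serializable value) throws Exception {
              HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
              conn.setRequestMethod("PUT");
              conn.setDoOutput(true);
              conn.setRequestProperty("Content-Type", "application/x-java-serialized-object");
              try (OutputStream os = conn.getOutputStream();
                   ObjectOutputStream oos = new ObjectOutputStream(os)) {
                  oos.writeObject(value);  // binary Java serialization, no text conversion
              }
              int status = conn.getResponseCode();  // e.g. 201 Created on success
              if (status >= 400) {
                  throw new RuntimeException("PUT failed with HTTP status " + status);
              }
          }
      }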

    Read the article

  • Why isn't obliterate an essential feature of Subversion?

    - by Dimitri C.
    For some years now, I've been waiting for Subversion to offer a "delete permanently" (obliterate) function. I hesitate to make the transition to Subversion (coming from Visual SourceSafe :p) because I think this is an essential feature, as otherwise I'd expect the repository to grow unstoppably. However, for one reason or another, the feature gets postponed over and over again. So I'm beginning to wonder if there is some other feature or workaround which makes the obliterate function dispensable. What do you do when you want to shrink the SVN central repository?
    Example 1: I check in a large third-party library, and after a few weeks I realize it is not suited to my needs. I don't want to store and back up that large amount of data forever.
    Example 2: I have 10 versions of 10 big third-party libraries in the repository, but I only use the latest versions.
    Example 3: I accidentally checked in sensitive information (as suggested by John).
    Example 4: I accidentally checked in some big files that were never meant to be put in the repository.

    Read the article

  • Choosing approach for an IM client-server app

    - by John
    Update: totally re-wrote this to be more succinct. I'm looking at a new application, one part of which will be very similar to standard IM clients, i.e. text chat, the ability to send attachments, and maybe some real-time interaction like a multi-user whiteboard. It will be client-server, i.e. all traffic goes through my central server. That means if I want to support cross-communication with other IM systems, I am still free to pick any protocol for my own client-server communication - my server can use XMPP or whatever to talk to other systems. Clients are expected to include desktop apps, but probably also browser-based ones, either through Flex/Silverlight or HTML/AJAX. I see 3 options for my own client-server communication layer:
    1. XMPP. The benefits are that clients already exist, as do open-source servers. However, it requires the most up-front research/learning and also appears as though it might raise legal issues due to the GPL.
    2. Custom sockets. A server app makes connections with the clients, allowing any text/binary data to be sent very fast. However, this approach requires building said server from scratch, and also makes a JS client tricky.
    3. Servlets (or a similar web server). Using the tried and tested Java web stack, clients send HTTP requests similar to AJAX-based websites. The benefit is that the server is easy to write using well-established technologies, and easy to talk to. But what restrictions would this bring? Is it appropriate technology for real-time communication?
    Advice and suggestions are welcome, especially on the pros and cons of a web-server approach compared to a socket-based approach.
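
    To make option 3 concrete, here is a minimal sketch (class names and parameters are invented for illustration) of a servlet that queues messages in memory and hands them to clients that poll over plain HTTP; the real-time feel then depends on how aggressively clients poll, or on long-polling/Comet techniques layered on top:

      import java.io.IOException;
      import java.util.Queue;
      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.ConcurrentLinkedQueue;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      // Hypothetical sketch of the HTTP/servlet approach: clients POST messages
      // and GET (poll for) anything queued for them.
      public class MessageServlet extends HttpServlet {
          // In-memory queues keyed by user id; a real server would persist these.
          private static final ConcurrentHashMap<String, Queue<String>> inbox = new ConcurrentHashMap<>();

          @Override
          protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
              // Client sends ?to=<user>&text=<message> to deliver a chat line.
              String to = req.getParameter("to");
              String text = req.getParameter("text");
              inbox.computeIfAbsent(to, k -> new ConcurrentLinkedQueue<>()).add(text);
              resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
          }

          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
              // Client polls ?user=<user> and receives any queued messages, one per line.
              Queue<String> q = inbox.getOrDefault(req.getParameter("user"), new ConcurrentLinkedQueue<>());
              resp.setContentType("text/plain");
              for (String msg; (msg = q.poll()) != null; ) {
                  resp.getWriter().println(msg);
              }
          }
      }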

    Read the article

  • Is it good practice to blank out inherited functionality that will not be used?

    - by Timo Kosig
    I'm wondering if I should change the software architecture of one of my projects. I'm developing software for a project where two sides (in fact a host and a device) use shared code. That helps because shared data, e.g. enums, can be stored in one central place. I'm working with what we call a "channel" to transfer data between device and host. Each channel has to be implemented on the device side and the host side. We have different kinds of channels: ordinary ones, and special channels which transfer measurement data. My current solution has the shared code in an abstract base class; from there on, the code is split between the two sides. As it has turned out, there are a few cases where we would like to share code but can't, so we have to implement it on each side. The principle of DRY (don't repeat yourself) says that you shouldn't have the same code twice. My thought was to combine the functionality of, e.g., the abstract measurement channel on the device side and the host side into one abstract class with the shared code. That means, though, that once we create an actual class for either the device or the host side for that channel, we have to hide the functionality that is used by the other side. Is this an acceptable thing to do:
    public abstract class MeasurementChannelAbstract
    {
        protected void MethodUsedByDeviceSide() { }
        protected void MethodUsedByHostSide() { }
    }
    public class DeviceMeasurementChannel : MeasurementChannelAbstract
    {
        public new void MethodUsedByDeviceSide()
        {
            base.MethodUsedByDeviceSide();
        }
    }
    Now, DeviceMeasurementChannel only uses the device-side functionality from MeasurementChannelAbstract. By declaring all methods/members of MeasurementChannelAbstract protected, you have to use the new keyword to enable that functionality to be accessed from the outside. Is that acceptable, or are there any pitfalls, caveats, etc. that could arise later when using the code?

    Read the article

  • How can I simplify this user interface?

    - by Bears will eat you
    I'm writing an internal-tools webapp; one of the central pages in this tool has a whole bunch of related commands the user can execute by clicking one of a number of buttons on the page (the original post shows a screenshot of the button row here). Ideally, all of the buttons would fit on one line. Ordinarily I'd do this by changing each widget from a button with a (sometimes long) text label to a simple, compact icon - e.g. one of them could be replaced by a familiar disk icon. Unfortunately, I don't think I can do this for every button on this particular page. Some of the command buttons just don't have good visual analogs - "VDS List". Or, if I needed to add another button in the future for some other kind of list, I'd need two icons that both communicate "list-ness" and which list. So, I'm still considering this option, but I don't love it. So it's come time for me to add yet another button to this section (don't you love internal tools?). There's not enough room on that single line to fit the new button. Aside from the icon solution I already mentioned, what would be a good* way to simplify/declutter/reduce or otherwise improve this UI? *As per Jakob Nielsen's article, I'd like to think that a dropdown menu is not the solution.

    Read the article

  • Bad idea to force creation of Mercurial remote heads (ie. branches)?

    - by Chad Johnson
    I am developing a centralized web application, and I have a centralized Mercurial repository. Locally I created a branch in my repository:
    hg branch my_branch
    I then made some changes and committed. Then when I try to push, I get:
    abort: push creates new remote branch 'my_branch'! (did you forget to merge? use push -f to force)
    I've just been using push -f. Is this bad? I WANT multiple branches in my central, remote repository, as I want to 1) back up my work and 2) allow other developers to develop with me on that branch. Is it bad or something to have branches in my remote repository? Should I not be doing push -f (and if not, what should I do)? Why does Joel say this in his tutorial: "Occasionally I've made a change in a branch, pushed, switched to another branch, and changes I had made in the branch I switched to were mysteriously reverted to a previous version from several commits ago." Maybe this is a symptom of forcing a push?

    Read the article

  • Plugin-like architecture in .NET

    - by devoured elysium
    I'm trying to implement a plugin-like application. I know there are already several solutions out there, but this is just going to be a proof of concept, nothing more. The idea would be to make the main application almost featureless by default and then let the plugins know about each other, having them implement all the needed features. A couple of issues arise:
    1. I want the plugins at runtime to know about each other through my application. That wouldn't mean that at code time they couldn't reference other plugins' assemblies so they could use their interfaces, only that plugin-feature initialization should always go through my main app. For example: if I have both plugins X and Y loaded and Y wants to use X's features, it should "register" its interest through my application. I'd have to have a kind of "dictionary" in my application where I store all the loaded plugins. After registering its interest with my application, plugin Y would get a reference to X so it could use it. Is this a good approach?
    2. When coding plugin Y, which uses X, I'd need to reference X's assembly so I can program against its interface. That raises the issue of versioning. What if I code my plugin Y against an outdated version of plugin X? Should I always use a "central" place where all assemblies are kept, holding the up-to-date versions?
    Are there by chance any books out there that specifically deal with these kinds of designs for .NET? Thanks

    Read the article

  • SharePoint 2007 - Content deployment and swapping content database

    - by Mel Lota
    Hi all, I'm currently working on a SharePoint 2007 site which is set up to allow clients to author content on a staging server; this content is then automatically pushed up to the live environment via content deployment. The content deployment is set up under 'Content deployment jobs and paths' in central admin. Now the problem I've got is that, historically, there has been a mixture of full and incremental deployments to the live site collection, which according to Stefan Goßner's best practices post (http://blogs.technet.com/stefan_gossner/pages/content-deployment-best-practices.aspx) is a bad idea, because things soon become out of sync. It's gotten to the point where content deployment has just stopped working, and incremental or full deployments throw errors in the logs. What I'm thinking is that I probably need to perform a full content deployment to an empty site collection and then somehow switch the new clean site collection with the current live one. I was wondering if anybody has any experience with this and could provide any pointers. I'm currently investigating the feasibility of performing the clean content deployment and then switching the live content database with the new one; however, in my tests I've found that as soon as I switch content databases, the incremental deployment still fails. Any help much appreciated. (Note: I did post this on SharePoint Overflow as well, but thought I'd put it on here in case anybody else has any ideas) Cheers

    Read the article

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying for data with hierarchical filtering. I have a few ideas about how to go about it, but I was wondering if there are any recommendations or suggestions that might be more efficient. As an example, imagine that a user is searching for a job. The job areas would be as follows:
    1: Scotland
    2: --- West Central
    3: ------ Glasgow
    4: ------ Etc
    5: --- North East
    6: ------ Ayrshire
    7: ------ Etc
    A user can search a specific area (i.e. Glasgow) or a larger area (i.e. Scotland). The two approaches I am considering are:
    1. Keep a note of children in the database for each record (i.e. cat 1 would have 2, 3, 4 in its children field) and query against that record with a SELECT * FROM Jobs WHERE Category IN Areas.childrenField.
    2. Use a recursive function to find all results which relate to the selected area.
    The problems I see with these are: 1. holding this data in the db means having to keep track of all changes to the structure; 2. recursion is slow and inefficient. Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
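
    As a rough illustration of option 2 (shown in Java; the area ids and tree shape are just the example above), the recursion is cheap when the area tree is small enough to cache in memory - collect the descendant ids once, then run a single IN (...) query against Jobs. On SQL Server 2005 a recursive common table expression can also perform the same walk server-side:

      import java.util.ArrayList;
      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;

      // Sketch: walk a cached parent -> children map, collect every descendant id,
      // then issue one "WHERE Category IN (...)" query with the result.
      public class AreaTree {
          static void collect(int areaId, Map<Integer, List<Integer>> children, List<Integer> out) {
              out.add(areaId);
              for (int child : children.getOrDefault(areaId, List.of())) {
                  collect(child, children, out);  // depth-first over the descendants
              }
          }

          public static void main(String[] args) {
              Map<Integer, List<Integer>> children = new HashMap<>();
              children.put(1, List.of(2, 5));   // Scotland -> West Central, North East
              children.put(2, List.of(3, 4));   // West Central -> Glasgow, Etc
              children.put(5, List.of(6, 7));   // North East -> Ayrshire, Etc

              List<Integer> ids = new ArrayList<>();
              collect(1, children, ids);        // searching "Scotland" yields [1, 2, 3, 4, 5, 6, 7]
              System.out.println(ids);
          }
      }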

    Read the article

  • Text replacement for multiple script/config files

    - by user343804
    I am doing some sort of science with a combination of Python, bash scripts and configuration files with weird syntaxes from old C programs. To run different tests, I have to tune (open the text file, edit and save) a number of these files, but it is getting harder and harder to remember which files to edit. Also, the tasks are very repetitive, and I end up with a bunch of commented lines switching the settings for a specific run on and off. I dream of a tool that can keep a list of files (regardless of the file format) and allow me to centrally choose from a predefined list of "profiles", taking care of editing the script/config files. For example, let's say I have a config and a script file:
    config.cfg
    var1 = 1.0
    #var1 = 1.5
    script.sct
    function(200.0)
    #function(300.0)
    I would like to instead have auxiliary text files:
    config.cfg.tem
    var1 = $$var1$$
    script.sct.tem
    function($$par1$$)
    and, in a central script or (even better) a GUI, just switch from profile1 (e.g. var1=1.0, par1=200.0) to profile2 (e.g. var1=1.5, par1=300.0) and have the tool automatically update the text files. Any idea if such a tool exists?
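
    If nothing off the shelf fits, the substitution step itself is tiny. Below is a minimal sketch of it (written in Java purely to illustrate the idea; the questioner's own Python/bash environment would express the same loop in a few lines, and the profile values are just the ones from the example):

      import java.io.IOException;
      import java.nio.charset.StandardCharsets;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.util.Map;

      // Hypothetical sketch: every *.tem file is rewritten with its $$name$$
      // placeholders filled from the selected profile.
      public class ApplyProfile {
          static final Map<String, Map<String, String>> PROFILES = Map.of(
                  "profile1", Map.of("var1", "1.0", "par1", "200.0"),
                  "profile2", Map.of("var1", "1.5", "par1", "300.0"));

          public static void main(String[] args) throws IOException {
              Map<String, String> profile = PROFILES.get(args[0]);  // e.g. "profile2"
              for (String template : new String[]{"config.cfg.tem", "script.sct.tem"}) {
                  String text = Files.readString(Path.of(template), StandardCharsets.UTF_8);
                  for (Map.Entry<String, String> e : profile.entrySet()) {
                      text = text.replace("$$" + e.getKey() + "$$", e.getValue());
                  }
                  // config.cfg.tem -> config.cfg, script.sct.tem -> script.sct
                  Files.writeString(Path.of(template.replace(".tem", "")), text);
              }
          }
      }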

    Read the article

  • Cryptography for P2P card game

    - by zephyr
    I'm considering writing a computer adaptation of a semi-popular card game. I'd like it to function without a central server, and I'm trying to come up with a scheme that will make cheating impossible without having to trust the clients. The basic problem as I see it is that each player has several piles of cards (draw deck, current hand and discard deck). It must be impossible for either player to alter the composition of these piles except when allowed by the game rules (i.e. drawing or discarding cards), nor should players be able to know what is in their own or their opponent's piles. I feel like there should be some way to use something like public-key cryptography to accomplish this, but I keep finding holes in my schemes. Can anyone suggest a protocol or point me to some resources on this topic? [Edit] Ok, so I've been thinking about this a bit more, and here's an idea I've come up with. If you can poke any holes in it, please let me know. At shuffle time, a player has a stack of cards whose values are known to them. They take these values, concatenate a random salt onto each, then hash them. They record the salts and pass the hashes to their opponent. The opponent concatenates a salt of their own, hashes again, then shuffles the hashes and passes the deck back to the original player. I believe at this point the deck has been randomized and neither player can have any knowledge of the values. However, when a card is drawn, the opponent can reveal their salt, allowing the first player to determine what the original value is, and when the card is played the player reveals their own salt, allowing the opponent to verify the card value.
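
    The building block in that scheme is a salted hash commitment. A minimal sketch of just that step (SHA-256 is an arbitrary but reasonable choice here; the two-round salting and shuffling described above would be layered on top of it):

      import java.nio.charset.StandardCharsets;
      import java.security.MessageDigest;
      import java.security.NoSuchAlgorithmException;
      import java.security.SecureRandom;
      import java.util.Base64;

      // Sketch of the commit/reveal step: a card is committed to as hash(value + salt);
      // revealing the salt later lets the other player verify the card without having
      // been able to read it in advance.
      public class CardCommitment {
          static String newSalt() {
              byte[] raw = new byte[16];
              new SecureRandom().nextBytes(raw);
              return Base64.getEncoder().encodeToString(raw);
          }

          static String commit(String cardValue, String salt) throws NoSuchAlgorithmException {
              byte[] d = MessageDigest.getInstance("SHA-256")
                      .digest((cardValue + "|" + salt).getBytes(StandardCharsets.UTF_8));
              return Base64.getEncoder().encodeToString(d);
          }

          // When a card is played, the player reveals value and salt; the opponent
          // recomputes the hash and compares it to the commitment they were holding.
          static boolean verify(String commitment, String cardValue, String salt) throws NoSuchAlgorithmException {
              return commitment.equals(commit(cardValue, salt));
          }
      }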

    Read the article

  • Do Distributed Version Control Systems promote poor backup habits?

    - by John
    In a DVCS, each developer has an entire repository on their workstation, to which they can commit all their changes. Then they can merge their repo with someone else's, or clone it, or whatever (as I understand it; I'm not a DVCS user). To me that flags a side effect: being more vulnerable to forgetting to back up. In a traditional centralised system, both you as a developer and the people in charge know that if you commit something, it's held on a central server which can have decent backup solutions in place. But using a DVCS, it seems you only have to push your work to a server when you feel like sharing it. It's all very well having the repo locally so you can work on your feature branch for a month without bothering anyone, but it means (I think) that checking your code into the repo is not enough; you have to remember to do regular pushes to a backed-up server. It also means, doesn't it, that a team lead can't see all those nice SVN commit emails to keep a rough idea of what's going on in the code-base? Is any of this a real issue?

    Read the article
