Search Results

Search found 25885 results on 1036 pages for 'claims based identity'.

Page 635/1036

  • Virtualized Development Server for simulating 3-Tier Environment

    - by chris.cyvas
    Hello, I am thinking about buying a new server-based development box for development (redundantly redundant, I know ;)). Ideally, I want to run something like ESXi or Xen Hypervisor at the lowest level. On top of that I want to add (at least) five Linux VMs for the following uses: 2 web servers, 2 application servers, 1 database server. I want to load balance the two web servers and the two application servers, and (somewhat obviously) they all need to be networked together to simulate a production environment. Also, it used to be the case that the recommendation was to put each VM on its own hard drive, but I'm not sure that holds water anymore. Does anyone have advice on how to pull this off? Gotchas, things to look out for, etc.? Thanks!
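    A minimal sketch of the load-balancing piece, assuming nginx is used as the balancer in front of the two application (or web) VMs; the IPs and ports below are placeholders, not anything from the original question:

        # nginx.conf fragment: round-robin between two backend VMs
        upstream app_pool {
            server 10.0.0.11:8080;   # app server 1 (hypothetical address)
            server 10.0.0.12:8080;   # app server 2 (hypothetical address)
        }

        server {
            listen 80;
            location / {
                proxy_pass http://app_pool;   # forward requests to the pool
            }
        }

    The same pattern (or HAProxy, or Apache mod_proxy_balancer) can sit in front of the two web servers as well, either on its own small VM or on the hypervisor host.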

    Read the article

  • Good podcasting solution?

    - by burnso
    I simply ask for the best, most common and simplest way to set up a podcast. Please, I need an answer so I can close this question. I run a Joomla-based website for a small church and now need a simple, cheap and effective solution for an audio podcast. I am looking for a solution that will do the following: users upload audio files to a service (preferably not to our own site) that is cheap, fast and simple to use (Dropbox, for example); files are easily embeddable into Joomla website articles to be played on the spot (through a simple-to-use media player), preferably through RSS feeds (to make it easy to update every week); files are downloadable; files are playable on iPhones and other smartphones; the podcast can be published via iTunes; and the solution doesn't need a lot of extra, new third-party software. I'd rather keep it simple and familiar than have a completely new system with a steep learning curve. At the moment we use Vimeo to host the podcast, through video files, but we'd like to move to something simpler that doesn't involve a series of difficult steps to upload the files to the web.
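    For reference, a minimal sketch of the kind of RSS feed iTunes and most podcast apps expect; every URL, title and date here is a placeholder:

        <?xml version="1.0" encoding="UTF-8"?>
        <rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
          <channel>
            <title>Sunday Sermons</title>
            <link>http://www.example.org/</link>
            <description>Weekly sermon audio</description>
            <item>
              <title>Sermon, 23 September 2012</title>
              <enclosure url="http://files.example.org/sermon-2012-09-23.mp3"
                         length="24986239" type="audio/mpeg"/>
              <guid>http://files.example.org/sermon-2012-09-23.mp3</guid>
              <pubDate>Sun, 23 Sep 2012 10:00:00 +0000</pubDate>
            </item>
          </channel>
        </rss>

    Any host that can serve the MP3 files over plain HTTP can back a feed like this, and Joomla articles can simply embed the same MP3 URLs.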

    Read the article

  • Two keyboards - one qwerty one dvorak. Can windows use both simultaneously?

    - by Alain
    Similar to this question: Is it possible to use multi keyboards with multi keyboard layouts simultaneously? - I have two keyboards plugged in simultaneously. Both are qwerty keyboards, but I've rearranged the keys on one to be a Dvorak layout. I'd like to find some way to configure Windows (XP) such that, while both keyboards are plugged in, typing on each results in its respective layout. I'd prefer it if there were some way to have the system automatically use the desired layout based on which USB keyboard it is receiving input from. If no such solution is possible, I would settle for a fast way of switching between the US English and US English Dvorak layouts so that in a pinch I can easily go from one to the other. Thanks.

    Read the article

  • Why isn't Adobe software multilingual?

    - by Takowaki
    I work in a design studio with several non-native English speakers (in this case, Japanese and Chinese speakers). I have installed the latest Creative Suite (CS5) on our Mac workstations and was once again disappointed that, unlike so many modern software packages, there is still no option to change the language of the software. Most of the team has been good enough to work on their English, but it would be much more helpful for them to work in their native language. Why does Adobe continue to require separate licenses based on language? Are they operating under the assumption that only a single language is ever spoken in any given country? Are there any third-party options, or does Adobe at least have some sort of statement regarding this policy?

    Read the article

  • Changing the Default Install Location of an MSI

    - by PSteele
    A few months ago, I had to tweak an MSI installer.  It was installing into a specific directory (named the same as the application) underneath Program Files.  Since the location of Program Files can change from machine to machine, the MSI has a special token you can use for Program Files (as well as for the application name).  So the current value for “DefaultLocation” of the Application Folder was: [ProgramFilesFolder]\[ProductName] During installation, these tokens are replaced by the actual locations on the current machine. I needed to change this to a specific folder underneath the user’s My Documents directory.  I poked around the help file and could not find where these special tokens (like “[ProgramFilesFolder]”) are defined.  Obviously, there must be some specific set of values that are available, and I was sure My Documents is one of them. I finally found them documented, so I’m posting the link here.  Hopefully, it will help someone else out.  Not sure where I found this link… System Folder Properties For me, it was as easy as changing the DefaultLocation to: [PersonalFolder]\MyToolName\Application Technorati Tags: .NET,MSI

    Read the article

  • How can I make Bash (or Zsh) run a particular command before each entered command?

    - by Peeja
    I'd like to configure Bash to run a particular command before running each command line I enter at the prompt. Specifically, I'd like to tell Vim (which is running in another terminal) to write all open buffers, because in my workflow anything that's unsaved when I leave Vim is a mistake. Is there an option for this in Bash? If not, is there an option in Zsh? (There is a readline-based solution on another question that somewhat fits this problem, but it feels a bit hacky. I'll take it as a last resort.)
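    A rough sketch of the hook approach, assuming the other Vim instance was started with client/server support (e.g. vim --servername VIM, which needs X11 for terminal Vim); in Zsh the built-in preexec hook runs just before each entered command line, and in Bash a DEBUG trap is the closest equivalent:

        # Zsh: runs right before each command line you enter is executed
        preexec() {
            vim --servername VIM --remote-send '<Esc>:wall<CR>' 2>/dev/null
        }

        # Bash: a DEBUG trap fires before each simple command (a looser fit)
        trap 'vim --servername VIM --remote-send "<Esc>:wall<CR>" 2>/dev/null' DEBUG

    This is only a sketch; the server name and key sequence are assumptions to adapt to your setup.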

    Read the article

  • Software solution from the 2000's, should I attempt to patch or remake the whole thing?

    - by ShadowScripter
    I was sent out to discuss a system that a certain company is currently using and what should be done with it. The company manufactures various carton displays, and the system was developed to keep track of clients, orders and prices. A lot has happened since the system was created, and it is now, as the manager described it, "locked up" and "problematic", which I translate as "not dynamic" and "unstable". Some info about the system: it was developed around the year 2000; it is fairly small (2-5 users, 6 forms, ~8 tables with average quantities of data); it was built on early Visual Basic with forms created in the drag-and-drop designer; the interface is basically just a window with a menu and some forms; it uses an MSSQL database (SQL Server 2005) for storage and an ODBC driver to query it (the data was migrated from Excel, and before Excel it was handled, calculated and written by hand on paper); and the users work in a Windows XP environment (and up). Their main problem is that they can no longer adjust and calculate prices, add new carton types, etc., correctly, because they can't (or rather, they don't know how to) touch the data on the server. I suggested three possible solutions: (1) attempt to patch the current system; (2) create a fresh new interface (preferably in a similar environment, VB.NET or VB-based); (3) bring it back to an Excel solution, considering it is such a small system. There might be more options, but these are the ones I could think of. My questions are: What should I recommend and why? What are the pros and cons of these alternatives? Are there other (possibly better) alternatives?

    Read the article

  • XML Rules Engine and Validation Tutorial with NIEM

    - by drrwebber
    Our new XML Validation Framework tutorial video is now available. See how to easily integrate code-free adaptive XML validation services into your web services using the Java CAMV validation engine. CAMV allows you to build fault-tolerant content checking with XPath that can optionally use SQL data lookups. This can provide warnings as well as error conditions, to tailor your validation layer to exactly meet your business application needs. The video also covers developing test suites using Apache Ant scripting of validations, which allows a community to share sets of conformance-checking tests and tools. On the technical XML side, the video introduces XPath validation rules and illustrates the concepts of XML content and structure validation. CAM validation templates allow contextual, parameter-driven, dynamic validation services to be implemented, compared to using a static and brittle XSD schema approach. The SQL table lookup and code list validation are discussed and examples presented. Features are highlighted along with a demonstration of the interactive generation of actual live XML data from a SQL data store, followed by validation processing complete with error and warning detection. The presentation provides a primer for developing web service XML validation and integrating it into a SOA approach, along with examples and resources. Alignment with the NIEM IEPD process for interoperable information exchanges is also discussed, along with NIEM rules services. The CAMV engine is a high-performance, scalable Java component for rapidly implementing code-free validation services and methods. CAMV is a next-generation WYSIWYG approach that builds on older Schematron coding-based interpretative runtime tools and provides a simpler declarative metaphor for rules definition. See: http://www.youtube.com/user/TheCAMeditor
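    As a generic illustration of the kind of XPath content rule described (this is plain XPath, not CAM template syntax, and the element names are invented), a rule might select records that claim to be approved but carry no approval date:

        //Order[Status = 'Approved' and not(ApprovalDate)]

    A validation layer can then report any node matched by such an expression as an error or a warning.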

    Read the article

  • Oracle Transportation Management (Lead) Functional Consultant in Germany

    - by user769227
    My name is Giovanni and I lead the practice of OTM (Oracle Transportation Management) consultants in Western Europe. I currently have a role open for an OTM Lead Consultant to join my international team in Germany. Oracle Transportation Management is the leading TMS application software in the market, as confirmed by Gartner’s classification of it as a LEADER in its TMS Magic Quadrant, with the highest rating among vendors. The OTM Consulting practice is a team of OTM functional and technical specialists located across Europe whose broad objective is to assist companies in the implementation of their TMS solution based on OTM. These companies are leading shippers in various industries and logistics service providers. Key requirements for this role are: relevant experience with Supply Chain or Transportation Management in other consulting organizations or large enterprises, the drive to learn the leading TMS application software in today’s market, and the interest in joining a truly international team. We offer the opportunity to work for a leader of the IT industry and assist international clients in realizing their business transformation initiatives through innovation. If you have an entrepreneurial spirit and are looking for a work culture where innovation is the goal, hard work is expected, and creativity is rewarded, then please visit this link for more information.

    Read the article

  • UDK - How to make sure a PhysicalMaterial mask actually works?

    - by tomacmuni
    Hello, I have been reading the UDK documentation about physical materials and masks. I have my 1-bit BMP mask and the two physical material assets I want to trigger from the black and white channels. I have applied my material to both a rigid body and a skeletal mesh, and neither apparently uses the mask. If I assign a regular physical material (one that doesn't use a mask) then it works fine, but this defeats the point because it gives only one hit reaction. The documentation states that it is possible to add this behaviour to other classes by following the KActor implementation. How do I do that? Here is the quote: "The following properties [i.e., ImpactEffect - Particle system to spawn at the point of impact + ImpactSound - Sound to play when an impact occurs] allow you to attach sounds and effects to physical collisions. These only work on classes which support them, which at the moment is only KActor. By looking at the implementation in KActor though, you can add this functionality to other classes (or you can subclass KActor)." Essentially, how do I make sure a PhysicalMaterial mask actually works? What code could be added to a skeletal mesh class, perhaps, to get it going? Any help appreciated.

    Read the article

  • Choosing a monitoring system for a dynamically scaling environment: Nagios v. Zabbix

    - by wickett
    When operating in the cloud and scaling boxes automatically, you run into certain monitoring issues. Sometimes we might be monitoring 10 boxes and sometimes 100; the machines scale up and down based on demand. Right now, I think the best answer is to choose a monitoring solution that can register and remove targets via calls to an API. But is this really the best approach? I like the idea of dynamic discovery, but that is also a problem in the cloud, given that the targets are not all in the same subnet. What monitoring solutions allow for a scaling environment like this? Zabbix currently has a draft API, but I have been unable to find a similar API for Nagios. Is there one? Does anyone have any alternate suggestions besides Nagios and Zabbix?
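    As a rough sketch of the API-driven registration idea, a freshly booted node could add itself as a Zabbix host over the JSON-RPC API (the method names follow the Zabbix API, but the URL, credentials, group and template IDs are placeholders, and exact parameters vary by Zabbix version):

        import json
        import urllib.request

        ZABBIX_URL = "http://monitor.example.com/zabbix/api_jsonrpc.php"  # placeholder

        def call(method, params, auth=None, req_id=1):
            """Send one JSON-RPC request to the Zabbix API and return its result."""
            payload = json.dumps({"jsonrpc": "2.0", "method": method,
                                  "params": params, "auth": auth, "id": req_id}).encode()
            req = urllib.request.Request(ZABBIX_URL, data=payload,
                                         headers={"Content-Type": "application/json-rpc"})
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["result"]

        # Log in, then register this instance as a monitored host.
        token = call("user.login", {"user": "api", "password": "secret"})
        call("host.create", {
            "host": "web-10-0-0-42",
            "interfaces": [{"type": 1, "main": 1, "useip": 1,
                            "ip": "10.0.0.42", "dns": "", "port": "10050"}],
            "groups": [{"groupid": "2"}],            # placeholder host group ID
            "templates": [{"templateid": "10001"}],  # placeholder template ID
        }, auth=token)

    A matching host.delete call in the instance's shutdown hook keeps the monitored inventory in step with the scaling group.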

    Read the article

  • Reverse proxy with SSL and IP passthrough?

    - by Paul
    It turns out that the IP of a much-needed new website is blocked from inside our organization's network, for reasons that will take weeks to fix. In the meantime, could we set up a reverse proxy on an Internet-based server that forwards the SSL traffic, and perhaps the client IPs, to the external site? Load will be light. There is no need to terminate SSL on the proxy. We may be able to poison DNS internally so the original URL can keep working. How do I work out whether I need URL rewriting? Squid, Apache, nginx, or something else? Setup would be fastest on Windows 2000, but other OSes are OK if that would help. Simple and quick are good, since it's a temporary solution. Thanks for your thoughts!
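    One hedged illustration of the no-termination approach: a TCP-mode relay (HAProxy syntax shown here, with a placeholder backend address) simply passes the TLS stream through, so no URL rewriting or certificates are needed on the proxy; the client IP, however, only survives if the far end accepts something like the PROXY protocol, otherwise the external site sees the proxy's IP:

        frontend https_in
            mode tcp
            bind *:443
            default_backend blocked_site

        backend blocked_site
            mode tcp
            server origin 203.0.113.10:443    # placeholder IP of the external site

    Combined with an internal DNS override pointing the site's hostname at the proxy, the original URL keeps working unchanged.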

    Read the article

  • Easiest way to move my Windows installation to an SSD?

    - by Jon Artus
    I've taken the plunge and bought an SSD and want to move my existing Windows installation over. The current hard disk is 500 GB, but I've trimmed the contents down to about 40 GB. I'm transferring it to a 100 GB SSD and looking for the easiest way to copy everything across and set the SSD up as the boot device. I've looked at a few tools like Macrium Reflect, but they don't seem able to restore to a smaller drive. Do I need to go for something like PING to do this? I'm trying to avoid scary Linux-based boot utilities if possible; does anyone know of an easier way?

    Read the article

  • What is the recommended MongoDB schema for this quiz-engine scenario?

    - by hughesdan
    I'm working on a quiz engine for learning a foreign language. The engine shows users four images simultaneously and then plays an audio file; the user has to match the audio to the correct image. Below is my MongoDB document structure. Each document consists of an image file reference and an array of references to audio files that match that image. To generate a quiz instance I select four documents at random, show the images and then play one audio file chosen at random from the four documents. The next step in my application development is to decide on the best document schema for storing user guesses. There are several requirements to consider: (1) I need to be able to report statistics at a user level, for example total correct answers, total guesses, mean accuracy, etc.; (2) I need to be able to query images based on the user's learning progress, for example select 4 documents where guess count is 10 and accuracy is <= 0.50; (3) the schema needs to be optimized for fast quiz generation; (4) the schema must not cause future scaling issues with respect to document size (assume 1 million users who make an average of 1000 guesses each). Given all of this as background information, what would be the recommended schema? For example, would you store each guess in the Image document, or perhaps in a User document (not shown), or in a new collection created for logging guesses? Would you recommend logging the raw guess data, or would you pre-compute statistics by incrementing counters within the relevant document? Schema for the Image collection:

        {
          "_id": "505bcc7a45c978be24000005",
          "date": "2012-09-21 02:10:02 UTC",
          "imageFileName": "BD3E134A-C7B3-4405-9004-ED573DF477FE-29879-0000395CF1091601",
          "random": 0.26997075392864645,
          "user": "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF",
          "audioFiles": [
            {
              "audioFileName": "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2",
              "user": "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF",
              "audioLanguage": "English",
              "date": "2012-09-22 01:15:04 UTC"
            },
            {
              "audioFileName": "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2",
              "user": "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF",
              "audioLanguage": "Spanish",
              "date": "2012-09-22 01:17:04 UTC"
            }
          ]
        }
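    As a minimal sketch of one of the options raised above (a separate collection for raw guesses plus pre-computed counters on per-user and per-image documents), using PyMongo; every collection and field name here is hypothetical:

        from datetime import datetime, timezone
        from pymongo import MongoClient

        db = MongoClient()["quiz"]  # hypothetical database name

        def record_guess(user_id, image_id, correct):
            """Log the raw guess, then bump per-user and per-image counters."""
            db.guesses.insert_one({
                "user": user_id,
                "image": image_id,
                "correct": bool(correct),
                "date": datetime.now(timezone.utc),
            })
            inc = {"guessCount": 1, "correctCount": 1 if correct else 0}
            db.user_stats.update_one({"_id": user_id}, {"$inc": inc}, upsert=True)
            db.images.update_one({"_id": image_id}, {"$inc": inc})

    With the counters in place, queries like "images this user has guessed 10 times with accuracy <= 0.50" can be answered without scanning the raw guess log, while the guesses collection still preserves the full history for later analysis.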

    Read the article

  • Getting error message when trying to start a virtual machine

    - by Sunil J
    I have been using VMware on Windows for a long time, but after a long wait I moved to VirtualBox on Ubuntu 11.10. I installed 32-bit Ubuntu, installed all available updates and then installed VirtualBox. When I try to create a new Windows installation inside VirtualBox, I get the following error messages.

    First error dialog:

        VirtualBox - Error
        Failed to open a session for the virtual machine Windows XP.
        The virtual machine 'Windows XP' has terminated unexpectedly during startup with exit code 1.
        Details:
        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component:   Machine
        Interface:   IMachine {5eaa9319-62fc-4b0a-843c-0cb1940f8a91}

    Second error dialog:

        VirtualBox - Error in suplibOsInit
        Kernel driver not installed (rc=-1908)
        Please install the virtualbox-dkms package and execute 'modprobe vboxdrv' as root.

    Steps I tried: I have already tried reinstalling VirtualBox. Google results seem to indicate that the problem happens after kernel updates. Is there any way I can get this working? I need this for malware analysis, and if VirtualBox is going to crash on me all the time then I won't be able to use Ubuntu for work.

    Output of dpkg -l | grep virtual:

        rc  virtualbox     4.1.2-dfsg-1ubuntu1   x86 virtualization solution - base binaries
        rc  virtualbox-qt  4.1.2-dfsg-1ubuntu1   x86 virtualization solution - Qt based user interface
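    For reference, the commands the second dialog is asking for (package name as given in the error message; they typically need to be run again after any kernel update that breaks the module):

        sudo apt-get install virtualbox-dkms
        sudo modprobe vboxdrv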

    Read the article

  • Advertising on personalized pages behind a login

    - by johneth
    I am currently building a web app that requires users to log in. After they log in, they can see the content they've added to the web app and the things the web app has done with that content. The URL structure won't differentiate between users (e.g. every user's 'homepage' would be example.com/home, not something like example.com/username/home). This is much the same way Facebook works (all FB users' messages are at facebook.com/messages, for example). This presents a problem with advertising. I know that you can use AdSense behind a login, but as far as I'm aware that's for things like forums, where everyone sees the same content (which wouldn't be the case on this site). I also know that I could show AdSense the pages without being logged in, which would produce inferior ads. I'm fairly certain it would be against the Terms of Service to give AdSense a login to a 'dummy' account with typical content, as it would not be seeing the same thing as every other user (which is impossible, as they all see different things). So, my question is: is there an ad network, or other method, that can serve ads behind a login, maybe based on keywords rather than content?

    Read the article

  • MS Marketing Strategy

    - by Aaron Kowall
    I found this week’s Windows Phone 8 event interesting, not just because the new OS looks like it has some fantastic new features, but because of the wait for release. If I were a Nokia shareholder (which I am not) I’d be very unhappy with MS announcing that Windows Phone 8 will NOT work with current hardware. So there are some very nice Lumia devices, which arrived relatively recently at carriers and retailers, that are now end-of-life. I understand that MS needs to demonstrate progress against iOS and Android and that there is some Windows 8 tie-in they are trying to capitalize on (and MS IS still all about Windows). However, it’s a bit of a kick to the partners that have invested in the platform with pretty decent devices (Samsung, HTC and of course Nokia). Personally, I’m still using a Samsung Focus. I was seriously considering upgrading to a Lumia 900 (we just got Lync mobile available) but will now wait it out until new devices arrive with Windows Phone 8. If MS had waited to announce, I would happily have upgraded to the Lumia, and if I then found out it couldn’t be upgraded, that would be a gamble I took and lost and I’d live with it. Now, however, I can see the future and know that waiting is the better option for me, so that is one sale Nokia will miss out on. Based on some chats I’ve seen on mobile forums, I’m certainly far from the only one. I’m sure glad I’m not in charge of marketing at MS. There are tough decisions to be made there and I’m pretty sure you piss somebody off regardless. Technorati Tags: WP8,Lumia,Nokia,Samsung

    Read the article

  • Getting data from closed files with concatenate formula

    - by Pav
    Each day a program creates an Excel file for me with some data for the current day: the prices for products, how many people are available today, and things like that. Based on all this I need to make some forecasts and workplace allocations for workers. The problem is that I have to pull all this information in manually every time. To make it automatic I placed formulas in cells like: ='c:\ABC\[ABC 29-01-14.xlsx]sheet'!a1 Everything works fine, but the next day I have to change the file name to "ABC 30-01-14" in each cell, which is the same as entering the data manually. So I used the CONCATENATE function to build the reference from today's date automatically, and the INDIRECT function to turn the resulting text string into a real reference, and then realized that INDIRECT only works for open files, not closed ones. Is there any way to do this for closed files without VBA (because I don't know it), or with VBA but explained for an idiot?
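    A sketch of the formula described above (path and sheet name taken from the example; the date format matches the "29-01-14" style file name). As noted, INDIRECT only resolves references to workbooks that are currently open, so this works only while the day's file is open:

        =INDIRECT("'C:\ABC\[ABC " & TEXT(TODAY(),"DD-MM-YY") & ".xlsx]sheet'!A1")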

    Read the article

  • Synchronizing 3 servers over IP

    - by user93078
    I'm setting up a medical server for a hospital that has doctors in 3 different locations, meaning there would be 3 servers (1 in each location). All 3 servers would run the following software: Ubuntu Server 12.04 minimal; MySQL, PHP 5, Apache; the medical software, which reads/writes to the MySQL database; remote admin apps like Nagios and Webmin; and rsync for backup (rsync over SSH) as a cron job. The doctors at each location would access patient and billing data from their respective servers. What I'd like is for all of these servers to have synchronized data (especially the MySQL databases): say, on an hourly basis, each server pushes its data to a common remote server and the combined data is then brought back down to each of the servers. I know an easier way would be to run the medical app on a remote web server, but since this is medical data we're talking about, and knowing how common it is in our area for the net to go down, I wouldn't like a web-based scenario. Is such a setup possible? Would this be the right way to do things, or is there a better way? I would really appreciate views and comments on this (or on how to set it up).
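    A minimal sketch of the hourly push described above, as a cron entry on one location's server (host name, paths and credentials are all placeholders). Note that copying dumps with rsync only moves files around; merging concurrent changes from three sites into one database is a separate problem, usually handled with MySQL replication rather than file copies:

        # /etc/cron.d/push-to-central (hypothetical): dump the local database hourly and push it over SSH
        0 * * * * root mysqldump -u backup -pSECRET clinic > /var/backups/clinic-siteA.sql && rsync -az -e ssh /var/backups/clinic-siteA.sql backup@central.example.com:/srv/dumps/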

    Read the article

  • 64 bit Windows 7 + 32 bit windows XP dual boot?

    - by Mick
    I have purchased an i7-based PC pre-installed with 64-bit Windows 7 (Home Premium). Unfortunately, some third-party 32-bit software that I need to use is not working properly (see stackoverflow.com for details). I am now torn between installing 32-bit Windows XP outright and setting up a dual boot. Which option do you think will give me the fewest problems? And if the answer is dual boot, can you point me to a good guide for how to do it, preferably one specifically for my two OSes installed in this order (i.e. 7 x64 first)? EDIT: the performance of my 32-bit programs is critical, so I am concerned about any kind of 32-bit XP "emulation".

    Read the article

  • Is there a standard way to track 2d tile positions both locally and on screen?

    - by Magicked
    I'm building a 2D engine based on 32x32 tiles with OpenGL. OpenGL draws from the top left, so Y coordinates go down the screen as they increase. Obviously this is different than a standard graph where Y coordinates move up as they increase. I'm having trouble determining how I want to track positions for both sprites and tile objects (objects that are collections of tiles). My brain wants to set the world position as the bottom left of the object and track every object this way. The problem with this is I would have to translate it to an on screen position on rendering. The positive with this is I could easily visualize (especially in the case of objects made of multiple tiles) how something is structured and needs to be built. Are there standard ways for doing this? Should I just suck it up and get used to positions beginning in the top left? Here are the OpenGL calls to start rendering:

        // enable textures since we're going to use these for our sprites
        glEnable(GL_TEXTURE_2D);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

        // enable alpha blending
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        // disable the OpenGL depth test since we're rendering 2D graphics
        glDisable(GL_DEPTH_TEST);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
        glMatrixMode(GL_MODELVIEW);

    I assume I need to change glOrtho(0, WIDTH, HEIGHT, 0, 1, -1); to glOrtho(0, WIDTH, 0, HEIGHT, 1, -1);
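    The translation the question describes is a single flip at draw time. A language-neutral sketch of the arithmetic (shown here in Python; all names are made up), assuming world Y is measured upward from the bottom-left and the screen origin stays at the top-left:

        TILE_SIZE = 32        # pixels per tile
        SCREEN_HEIGHT = 480   # hypothetical window height in pixels

        def world_to_screen(world_x, world_y, height_px=TILE_SIZE):
            """Convert a bottom-left-origin world position to a top-left-origin screen position."""
            screen_x = world_x
            screen_y = SCREEN_HEIGHT - (world_y + height_px)
            return screen_x, screen_y

    Keeping world coordinates bottom-left and applying this flip (or the equivalent glOrtho change) only at render time lets the rest of the engine reason in the more intuitive orientation.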

    Read the article

  • Local LINQtoSQL Database For Your Windows Phone 7 Application

    - by Tim Murphy
    There aren’t many applications that are of value without having some form of data store.  In Windows Phone development we have a few options.  You can store text directly to isolated storage.  You can also use a number of third party libraries to create or mimic databases in isolated storage.  With Mango we gained the ability to have a native .NET database approach which uses LINQ to SQL.  In this article I will try to bring together the components needed to implement this last type of data store and fill in some of the blanks that I think other articles have left out.

    Defining A Database

    The first thing you are going to need to do is define classes that represent your tables and a data context class that is used as the overall database definition.  The table class consists of column definitions as you would expect.  They can have relationships and constraints as with any relational DBMS.  Below is an example of a table definition.  First you will need to add some assembly references to the code file.

        using System.ComponentModel;
        using System.Data.Linq;
        using System.Data.Linq.Mapping;

    You can then add the table class and its associated columns.  It needs to implement INotifyPropertyChanged and INotifyPropertyChanging.  Each level of the class needs to be decorated with the attribute appropriate for that part of the definition.  Where the class represents the table, the properties represent the columns.  In this example you will see that the column is marked as a primary key, not nullable, with an auto-generated value.  You will also notice that in the column property's set method it uses the NotifyPropertyChanging and NotifyPropertyChanged methods in order to make sure that the proper events are fired.

        [Table]
        public class MyTable : INotifyPropertyChanged, INotifyPropertyChanging
        {
            public event PropertyChangedEventHandler PropertyChanged;
            private void NotifyPropertyChanged(string propertyName)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
                }
            }

            public event PropertyChangingEventHandler PropertyChanging;
            private void NotifyPropertyChanging(string propertyName)
            {
                if (PropertyChanging != null)
                {
                    PropertyChanging(this, new PropertyChangingEventArgs(propertyName));
                }
            }

            private int _TableKey;
            [Column(IsPrimaryKey = true, IsDbGenerated = true, DbType = "INT NOT NULL Identity", CanBeNull = false, AutoSync = AutoSync.OnInsert)]
            public int TableKey
            {
                get { return _TableKey; }
                set
                {
                    NotifyPropertyChanging("TableKey");
                    _TableKey = value;
                    NotifyPropertyChanged("TableKey");
                }
            }
        }

    The last part of the database definition that needs to be created is the data context.  This is a simple class that takes an isolated storage connection string in its constructor and then exposes the tables as public fields.

        public class MyDataContext : DataContext
        {
            public MyDataContext(string connectionString) : base(connectionString)
            {
                MyRecords = this.GetTable<MyTable>();
            }

            public Table<MyTable> MyRecords;
        }

    Creating A New Database Instance

    Now that we have a database definition it is time to create an instance of the data context within our Windows Phone app.  When your app fires up it should check if the database already exists and create it if it does not.  I would suggest that this be part of the constructor of your ViewModel.

        db = new MyDataContext(connectionString);
        if (!db.DatabaseExists())
        {
            db.CreateDatabase();
        }

    The next thing you have to know is how the connection string for isolated storage should be constructed.  The main sticking point I have found is that the database cannot be created unless the file mode is read/write.  You may have different connection strings, but the initial one needs to be similar to the following.

        string connString = "Data Source = 'isostore:/MyApp.sdf'; File Mode = read write";

    Using your database

    Now that you have done all the up-front work it is time to put the database to use.  To make your life a little easier and keep proper separation between your view and your viewmodel, you should add a couple of methods to the viewmodel.  These will do the CRUD work of your application.  What you will notice is that the SubmitChanges method is the secret sauce in all of the methods that change data.

        private MyDataContext myDb;

        private ObservableCollection<MyTable> _viewRecords;
        public ObservableCollection<MyTable> ViewRecords
        {
            get { return _viewRecords; }
            set
            {
                _viewRecords = value;
                NotifyPropertyChanged("ViewRecords");
            }
        }

        public void LoadMedstarDbData()
        {
            var tempItems = from MyTable myRecord in myDb.MyRecords
                            select myRecord;
            ViewRecords = new ObservableCollection<MyTable>(tempItems);
        }

        public void SaveChangesToDb()
        {
            myDb.SubmitChanges();
        }

        public void AddMyTableItem(MyTable newScan)
        {
            myDb.MyRecords.InsertOnSubmit(newScan);
            myDb.SubmitChanges();
        }

        public void DeleteMyTableItem(MyTable newScan)
        {
            myDb.MyRecords.DeleteOnSubmit(newScan);
            myDb.SubmitChanges();
        }

    Updating an existing database

    What happens when you need to change the structure of your database?  Unfortunately you have to add code to your application that checks the version of the database, which over time will create some pollution in your code base.  On the other hand, it does give you control of the update.  In this example you will see the DatabaseSchemaUpdater in action.  Assuming we added a "Notes" field to the MyTable structure, the following code will check if the database is the latest version and add the field if it isn't.

        if (!myDb.DatabaseExists())
        {
            myDb.CreateDatabase();
        }
        else
        {
            DatabaseSchemaUpdater dbUpdater = myDb.CreateDatabaseSchemaUpdater();
            if (dbUpdater.DatabaseSchemaVersion < 2)
            {
                dbUpdater.AddColumn<MyTable>("Notes");
                dbUpdater.DatabaseSchemaVersion = 2;
                dbUpdater.Execute();
            }
        }

    Summary

    This approach does take a fairly large amount of work, but I think the end product is robust and very native for .NET developers.  It turns out to be worth the investment.

    del.icio.us Tags: Windows Phone,Windows Phone 7,LINQ to SQL,LINQ,Database,Isolated Storage

    Read the article

  • Interface (contract), Generics (universality), and extension methods (ease of use). Is it a right design?

    - by Saeed Neamati
    I'm trying to design a simple conversion framework based on these requirements: (1) all developers should follow a predefined set of rules to convert from the source entity to the target entity; (2) some overall policies should be applicable in a central place, without interfering with developers' code; (3) both the creation and the usage of converter classes should be easy. To solve these problems in C#, a thought came to my mind. I'm writing it here, though it doesn't compile at all; let's assume for a moment that C# accepted this code. I'll create a generic interface called IConverter:

        public interface IConverter<TSource, TTarget>
            where TSource : class, new()
            where TTarget : class, new()
        {
            TTarget Convert(TSource source);
            List<TTarget> Convert(List<TSource> sourceItems);
        }

    Developers would implement this interface to create converters. For example:

        public class PhoneToCommunicationChannelConverter : IConverter<Phone, CommunicationChannel>
        {
            public CommunicationChannel Convert(Phone phone)
            {
                // conversion logic
            }

            public List<CommunicationChannel> Convert(List<Phone> phones)
            {
                // conversion logic
            }
        }

    And to make the usage of this conversion class easier, imagine that we could add the static and this keywords to the methods to turn them into extension methods, and use them this way:

        List<Phone> phones = GetPhones();
        List<CommunicationChannel> channels = phones.Convert();

    However, this doesn't even compile. With those requirements, I can think of some other designs, but they each lack an aspect: either the implementation becomes more difficult or chaotic and out of control, or the usage becomes truly hard. Is this design right at all? What alternatives might I have to achieve those requirements?

    Read the article

  • Mouse Speed in GLUT and OpenGL?

    - by CroCo
    I would like to simulate a point that moves in 2D. The input should be the speed of the mouse, so the new position will be computed as follows: new_position = old_position + delta_time * mouse_velocity. As far as I know, GLUT has no function to acquire the current speed of the mouse between frames. What I've done so far to compute delta_time is the following:

        void Display()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glColor3f(1.0f, 0.0f, 0.0f);

            static int delta_t, current_t, previous_t;
            current_t = glutGet(GLUT_ELAPSED_TIME);
            delta_t = current_t - previous_t;
            std::cout << delta_t << std::endl;
            previous_t = current_t;

            glutSwapBuffers();
        }

    Where should I go from here? (Note: I have to get the speed of the mouse because I'm modeling a system.) Edit: with the above code, delta_t fluctuates a lot:

        34 19 2 20 1 20 0 16 1 1 10 21 0 13 1 19 34 0 13 0 6 1 14

    Why does this happen?

    Read the article

  • Can we use Google Earth images by applying our Unity3D mesh ?

    - by Jake M
    We are developing a commercial app for iOS and Android. The app will display development plans (architectural drawings) in a real-world 3D environment. It works by creating a Unity3D mesh, applying a Google Earth image as the texture, and then drawing 3D lines (the architectural drawings) over the Unity mesh. Question: we are unsure whether this is allowed under Google's terms and agreements; see the quoted text below. From that text it is a little vague whether what we are doing (explained above) violates their terms. What do you think? Does anyone know how we can contact Google to ask them? "You may not mass download or use bulk feeds of any Content, including but not limited to extracting numerical latitude or longitude coordinates, geocoding, text-based directions, imagery, visible map data, or Places data (including business listings) for use in other applications. You also may not trace Google Maps or Earth as the basis for tracing your own maps or geographic materials. For full details, please read section 10.3.1 of the Maps/Earth API Terms of Service." Does anyone have any advice or experience dealing with this?

    Read the article
