Search Results

Search found 2032 results on 82 pages for 'legacy systeme'.


  • Installing Ubuntu on Asus G75VW (UEFI)

    - by user101653
    You all are my last hope... help! I bought an Asus G75VW from Best Buy. It has the new UEFI firmware instead of the old-style 1980s BIOS, and it has Windows 8 preinstalled. I cannot get the G75VW to install Ubuntu 12.10 in EFI mode. I did get Ubuntu to install when I switched the firmware to CSM, in which case the computer sees and installs Ubuntu in "legacy mode". I attempted Boot Repair, and Ubuntu will load after about a minute, but only as legacy BIOS. If I switch the firmware to UEFI, "Binary is whitelisted" is displayed and I get a purple screen. My goal: keep my preinstalled Windows 8 on internal drive bay 1, install Ubuntu 12.10 on internal drive bay 2, and somehow get a menu at boot to choose between them. I am at a loss. I am a software programmer, but I am very bad at understanding firmware and partitioning. Any ideas? Has anyone done what I want to do? This is my second full day on this "issue"! If I cannot get Ubuntu installed, I'm returning the laptop and waiting until these UEFI/EFI obstacles are properly handled so people can install EFI-based Ubuntu without a hitch. Thanks, Dave

    Read the article

  • What is the best approach to solve a factory method problem which has to be an instance?

    - by Iago
    I have to add new functionality to a legacy web service project and I'm wondering what the best approach is for a concrete situation. The web service is simple: it receives an XML file, unmarshals it, generates the response objects, marshals them and finally sends the response back as an XML file. For every XML file received, the web service always responds with the same XML structure. What I have to do now is generate a different XML response according to the XML received. I have a controller class which holds all the marshalling/unmarshalling operations, but this controller class has to be an instance. Depending on the XML received, I need one set of marshalling methods or another. Trying to make as few changes as possible to the legacy source, what is the best approach? My first idea was to apply the factory method pattern to the controller class, but this class has to be an instance. I want to keep, as far as possible, this structure: classController.doMarshalling(); I think this one is a bit smelly: if(XMLReceived.isTypeOne()) classController.doMarshallingOne(); else if(XMLReceived.isTypeTwo()) classController.doMarshallingTwo(); else if(XMLReceived.isTypeThree()) classController.doMarshallingThree(); else if ... I hope my question is well understood.
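    One possible low-churn approach (an illustrative sketch only, not from the original question: the XmlType enum, the ClassController method names and the MarshallingDispatcher class are hypothetical stand-ins) is to keep the controller as the single existing instance and register its marshalling methods in a map keyed by the kind of XML received. The call site then stays a single call, and supporting a new XML type means adding one map entry rather than another else-if branch:

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical stand-ins for the poster's legacy types.
        class ClassController {
            void doMarshallingOne()   { /* existing legacy marshalling */ }
            void doMarshallingTwo()   { /* existing legacy marshalling */ }
            void doMarshallingThree() { /* existing legacy marshalling */ }
        }

        enum XmlType { TYPE_ONE, TYPE_TWO, TYPE_THREE }

        class MarshallingDispatcher {
            private final Map<XmlType, Runnable> handlers = new HashMap<>();

            // The controller stays a single instance; we only record which of its
            // existing methods handles which kind of incoming XML.
            MarshallingDispatcher(ClassController controller) {
                handlers.put(XmlType.TYPE_ONE,   controller::doMarshallingOne);
                handlers.put(XmlType.TYPE_TWO,   controller::doMarshallingTwo);
                handlers.put(XmlType.TYPE_THREE, controller::doMarshallingThree);
            }

            void doMarshalling(XmlType type) {
                Runnable handler = handlers.get(type);
                if (handler == null) {
                    throw new IllegalArgumentException("No marshalling registered for " + type);
                }
                handler.run(); // single call site replaces the if/else chain
            }
        }

    Usage would look like new MarshallingDispatcher(classController).doMarshalling(detectedType); the detection of the type from the received XML stays wherever it already lives in the legacy code.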

    Read the article

  • Windows 8 + Ubuntu dual boot

    - by Jack Yuan
    I installed Ubuntu 13.04 alongside Windows 8. Yes, I can access both of them, but the process is kind of long. In the BIOS, EFI boot is for Windows 8 and legacy support is for Ubuntu. If I put EFI first, startup goes straight to Windows 8 without offering me a choice. If I put legacy first, startup offers me a choice between Windows 8 and Ubuntu, but I can only actually boot Ubuntu; if I choose Windows 8, I get an error about a missing configuration file. That is to say, every time I want to switch to the other OS, I have to go into the BIOS and change the boot priority settings. I heard that Secure Boot might be the cause of this situation, but there is not even an option called "Secure Boot" in my BIOS, which means I cannot disable it. All I want is a boot menu that appears every time I turn on my computer so I can easily choose which OS I want for the day. Can anyone help me, please? Thank you very much!!

    Read the article

  • No Unity after Ubuntu 12.10 upgrade

    - by Aivaras
    After I upgraded from Ubuntu 12.04 to 12.10, I had low graphics and no Unity: just the mouse and the wallpaper. So I got into a terminal via Ctrl+Alt+T, launched Chrome and searched for a solution. First I tried this: sudo sh amd-driver-installer-12.6-legacy-x86.x86_64.run It did not work. Then I tried this: sudo add-apt-repository ppa:makson96/fglrx sudo apt-get update sudo apt-get upgrade sudo apt-get install fglrx-legacy That did not work either. I removed the repository, rolled the xorg version back to 1.13, and tried this: sudo sh /usr/share/ati/fglrx-uninstall.sh sudo apt-get remove --purge fglrx fglrx_* fglrx-amdcccle* fglrx-dev* xorg-driver-fglrx sudo apt-get remove --purge xserver-xorg-video-ati xserver-xorg-video-radeon sudo apt-get install xserver-xorg-video-ati sudo apt-get install --reinstall libgl1-mesa-glx libgl1-mesa-dri xserver-xorg-core That brought the screen resolution back, but still no Unity. Is there anything else I could do? My graphics card is: lspci | grep VGA 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV620 [Mobility Radeon HD 3400 Series]

    Read the article

  • Free Software – Ubiquitous in the Public Sector

    - by gravax
    NOTE: This article served as the basis for content published in June 2011 in the magazine Acteurs Publics. Created several decades ago to meet a need for sharing knowledge and skills, Free Software goes by several names, originally Anglo-Saxon, of which "Free Software" and "Open Source" are the most widely used. In English, the word "free" can mean both free-as-in-freedom and free-of-charge, which created a certain confusion that does not exist in French with the word "libre". As a result, you often see the acronym FOSS or FLOSS, for "Free, Libre, Open Source Software", used to remove the ambiguity. Nowadays, free software has become omnipresent in the public sector. It answers several critical needs, including cost control; choice (of partner, of software, of features); the freedom to modify applications to adapt them to one's own needs; and the security that comes from the fact that many developers and users have been able to review the quality of the code. Another aspect very common in free software is its near-systematic adherence to industry standards, which guarantees simple, easy integration with the existing information system. There are, however, elements to take into account when making strategic free software choices. While cost is clearly a selection factor that can lead to free software, that is mainly because free software often exists in a no-cost, freely downloadable version. But this is only the tip of the iceberg. When putting software into production you will need surrounding services, including integration, where the range of possible partners grows with the popularity of the chosen software, which drives costs down through healthy competition. You will also need to plan for technical support. Here again, the popularity of the chosen software widens the pool of potential support providers. The choice must be made on very solid criteria, in particular the ability to commit to service levels, availability 24 hours a day, 7 days a week (the country does not stop working at night or on weekends), and possibly geographic coverage matching your lines of business (a country like France, with its overseas departments and territories, spans a large share of the planet's time zones and geographic areas). Most public services, whether in education, health or government, already use free software. It is found on the infrastructure side, with products such as the MySQL database, much appreciated in the education world for building e-learning platforms in conjunction with other free products such as Moodle, or GlassFish, the application server prized by developers for its adherence to the Java EE 6 standard and its ease of implementation. Linux is extremely present as a free operating system in the datacenter, but also on the desktop. Virtualization tools such as Oracle VM, derived from Xen, are found in the datacenter, and VirtualBox on the developer's workstation. With such a range of solutions and tools in the Free Software world, Oracle brings the public sector targeted, effective answers to the market's needs, including technical support and the associated quality of service.

    Read the article

  • How to implement RSA-CBC? (I have uploaded the request document)

    - by tq0fqeu
    I don't know much about ciphers; I just want to implement RSA-CBC, which I take to mean the result of RSA encryption used in CBC mode, and I have already implemented RSA. Any programming language is OK, Java would be appreciated, thanks. I copy the assignment below (it was in French; here is a translation): Presentation of the mini-project. The goal of the mini-project is to implement an elementary version of RSA encryption of a block and to include this primitive in a block cipher system with block chaining and a random IV (Initial Vector). In this system, a plaintext (to be encrypted) is split into blocks of size t (chosen by the user), each (plain) block is encrypted with RSA into an encrypted block of the same size, and the ciphertext associated with the initial plaintext is obtained by chaining the encrypted blocks with the CBC (cipher-block chaining) method described in the course (see the "Block Ciphers" handout). Your program must ask the user for the size t and then, after generating the public and private keys, offer to encrypt or decrypt a (short) ASCII file. It is essential that your program be able to handle at least the case t = 32 (very unrealistic from a security point of view). To handle larger blocks, you will have to implement multi-precision arithmetic routines; for that, I advise you to call on free libraries such as GMP (GNU Multiprecision Library). For the random generation of the prime numbers p and q, you may also use specialised libraries, provided you give me all the necessary details. You must send me (before a date still to be fixed), at my e-mail address ([email protected]), a message (subject: [MI1-crypto]: assignment; body: the names of the students who worked on the mini-project) with a compressed folder attached containing your commented C or Java sources, your executable program, and a text or PDF file giving full details of the libraries used, your algorithmic and implementation choices, and the reasons for those choices (algorithmic complexity, robustness, ease of implementation, etc.). You may work in pairs or in threes, but I will be noticeably more demanding with groups of three. I have uploaded the original request at http://uploading.com/files/22emmm6b/enonce_projet.pdf/ thx
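    Since the assignment (translated above) is essentially textbook RSA applied block by block with CBC-style chaining and a random IV, here is a minimal educational sketch in Java using java.math.BigInteger. It is an assumption-laden illustration, not a secure or official construction: there is no padding, the keys are tiny, and chaining is done by XOR-ing each plaintext block with the low t bits of the previous ciphertext block so the chained value always stays smaller than the modulus.

        import java.math.BigInteger;
        import java.security.SecureRandom;

        // Toy RSA-CBC for the assignment's t = 32 case. Educational only -- not secure.
        public class RsaCbcSketch {
            public static void main(String[] args) {
                SecureRandom rnd = new SecureRandom();
                int t = 32;                                    // plaintext block size in bits

                // Key generation: p and q are t-bit primes, so n has about 2t bits and
                // every t-bit block is guaranteed to be smaller than n.
                BigInteger p = BigInteger.probablePrime(t, rnd);
                BigInteger q = BigInteger.probablePrime(t, rnd);
                BigInteger n = p.multiply(q);
                BigInteger phi = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE));
                BigInteger e = BigInteger.valueOf(65537);      // retry key generation if e does not invert mod phi
                BigInteger d = e.modInverse(phi);

                BigInteger mask = BigInteger.ONE.shiftLeft(t).subtract(BigInteger.ONE); // low t bits
                BigInteger iv = new BigInteger(t, rnd);        // random IV, kept alongside the ciphertext

                // Plaintext already split into t-bit blocks (hard-coded here for brevity).
                BigInteger[] plain = { BigInteger.valueOf(0x48656C6CL), BigInteger.valueOf(0x6F212121L) };

                // Encrypt: c_i = (m_i XOR low_t_bits(c_{i-1}))^e mod n, with c_0 = IV.
                BigInteger[] cipher = new BigInteger[plain.length];
                BigInteger prev = iv;
                for (int i = 0; i < plain.length; i++) {
                    cipher[i] = plain[i].xor(prev.and(mask)).modPow(e, n);
                    prev = cipher[i];
                }

                // Decrypt: m_i = (c_i^d mod n) XOR low_t_bits(c_{i-1}).
                prev = iv;
                for (int i = 0; i < cipher.length; i++) {
                    BigInteger recovered = cipher[i].modPow(d, n).xor(prev.and(mask));
                    System.out.println("block " + i + " round-trips: " + recovered.equals(plain[i]));
                    prev = cipher[i];
                }
            }
        }

    For larger block sizes the same loops work unchanged, since BigInteger already provides the multi-precision arithmetic the assignment mentions (GMP would play that role in a C implementation).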

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters Pick a small project with a small(ish) team.  This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board! Research Research the tool(s) that you want to use.  Some tools provide all of the features you would need while some only provide a slice of the pie.  DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next.  Ideally a tool can track database versions and automatically apply updates.  The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption.  Finally, an automated tool to generate a script file per database object is an added bonus as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes. Don’t settle on just one tool, identify several.  Then work with the team to evaluate the tools.  Have the team do some tests of the following scenarios with each tool: Baseline an existing database: can the migration tool work with legacy databases?  Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs. Add/drop tables Add/drop procedures/functions/views Alter tables (rename columns, add columns, remove columns) Massage data – migrations sometimes involve changing data types that cannot be implicitly casted and require you to decide how the data is explicitly cast to the new type.  This is a requirement for a migrations platform.  Think about a case where you might want to combine fields, or move a field from one table to another, you wouldn’t want to lose the data. Run the tool via the command line.  If you cannot automate the tool in Continuous Integration what is the point? Create a copy of a database on demand. Backup/restore databases locally. Let the team give feedback and decide together, what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms.  In general I would recommend staying away from the fluent platforms as they often lack baseline capabilities and add overhead to learn a new API when SQL is already a very well known DSL.  Code migrations often get messy with procedures/views/functions as these have to be created with SQL and aren’t cross platform anyways.  IMO stick to SQL based migrations. Reconciling Production If your project is a legacy application, you will need to reconcile the current state of production with your development databases.  Find changes in production and bring them down to development, even if they are old and need to be removed.  Once complete, produce a baseline of either dev or prod as they are now in sync.  Commit this to your VCS of choice. Add whatever schema changes tracking mechanism your tool requires to your development database.  This often requires adding a table to track the schema version of that database.  Your tool should support doing this for you.  You can add this table to production when you do your next release. Script out any changes currently in dev.  Remove production artifacts that you brought down during reconciliation.  Add change scripts for any outstanding changes in dev since the last production release.  Commit these to your repository.   
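    To make the change-script-plus-version-table mechanism discussed above concrete, here is a minimal, tool-agnostic sketch in Java/JDBC. It is not the API of TSqlMigrations or RoundHouse; the SchemaVersion table name, the H2 connection string and the hard-coded scripts are assumptions for illustration. The whole idea is: read the highest recorded version, apply each pending script in order inside a transaction, and record it.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;
        import java.util.LinkedHashMap;
        import java.util.Map;

        // Minimal illustration of what a DB migration tool does under the hood.
        public class TinyMigrator {

            public static void migrate(Connection conn, Map<Integer, String> scripts) throws SQLException {
                try (Statement st = conn.createStatement()) {
                    // Hypothetical tracking table; real tools create their own equivalent.
                    st.execute("CREATE TABLE IF NOT EXISTS SchemaVersion (Version INT NOT NULL)");
                }

                int current = 0;
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT MAX(Version) FROM SchemaVersion")) {
                    if (rs.next()) {
                        current = rs.getInt(1); // 0 when the table is empty
                    }
                }

                for (Map.Entry<Integer, String> step : scripts.entrySet()) {
                    if (step.getKey() <= current) {
                        continue;               // already applied
                    }
                    conn.setAutoCommit(false);  // apply each script transactionally where the engine allows it
                    try (Statement st = conn.createStatement()) {
                        st.execute(step.getValue());
                        st.execute("INSERT INTO SchemaVersion (Version) VALUES (" + step.getKey() + ")");
                        conn.commit();
                    } catch (SQLException ex) {
                        conn.rollback();        // leave the database at the last good version
                        throw ex;
                    } finally {
                        conn.setAutoCommit(true);
                    }
                }
            }

            public static void main(String[] args) throws SQLException {
                Map<Integer, String> scripts = new LinkedHashMap<>();
                scripts.put(1, "CREATE TABLE Customer (Id INT PRIMARY KEY, Name VARCHAR(100))");
                scripts.put(2, "ALTER TABLE Customer ADD Email VARCHAR(200)");

                // Illustrative embedded-database URL; point this at your own dev database.
                try (Connection conn = DriverManager.getConnection("jdbc:h2:./devdb")) {
                    migrate(conn, scripts);
                }
            }
        }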
Say No to Shared Dev DBs Simply put, you wouldn’t dream of sharing a code checkout, why would you share a development database?  If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS).  Doing DB VCS with a shared database is bound to cause problems as people won’t be able to easily script out their own changes from those that others are working on.   First prod release Copy prod to your beta/testing environment.  Add the schema changes table (or mechanism) and do a test run of your changes.  If successful you can schedule this to be run on production.   Evaluation After your first release, evaluate the pain points of the process.  Try to find tools or modifications to existing tools to help fix them.  Don’t leave stones unturned, iteratively evolve your tools and practices to make the process as seamless as possible.  This is why I suggest open source alternatives.  Nothing is set in stone, a good example was adding transactional support to TSqlMigrations.  We ran into situations where an update would break a database, so I added a feature to do transactional updates and rollback on errors!  Another good example is generating change scripts.  We have been manually making these for months now.  I found an open source project called Open DB Diff and integrated this with TSqlMigrations.  These were things we just accepted at the time when we began adopting our tool set.  Once we became comfortable with the base functionality, it was time to start automating more of the process.  Just like anything else with development, never be afraid to try to find tools to make your job easier!   Enjoy -Wes

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn’t quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony “I’d consider the current platform to be a “success”; it cost $10K, has lasted for 6 years, was finished, end to end in 6 months, and although we moan about it has got us quite a long way.” On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors that wouldn’t blog because our current blogging platform, Community Server, was too painful for them to use. Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they’d choose, the odds are they’d say WordPress. Regardless of its technical merits, it’s probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL — quite a difference from our Microsoft technology stack. We certainly didn’t want to rewrite the entire site — we just wanted a better blogging platform, with the rest of the existing, legacy site left as is. At a very high level, Simple-Talk’s technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, except with WordPress running the blogs, a different design is called for. We now use nginx as a reverse-proxy, which can then delegate requests to the appropriate application: So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you’re trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress. Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I’ve already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles. When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: You might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: We had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky. However, the problems with the old system do give us a good place to start thinking about desirable qualities from any changes to the platform. 
Specifically: Maintainability — the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere Replaceability — the tight coupling also meant that replacing one component wouldn’t be straightforward, especially if it wasn’t on a similar Microsoft stack. We’d like to be able to replace different parts without having to modify the existing codebase extensively Reusability — we’d like to be able to combine the different pieces of the system in different ways for different sites Repeatable deployments — rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily whether on production, staging servers, test servers or your own local machine Testability — if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development In the next part, I’ll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.

    Read the article

  • Wednesday at OpenWorld: Identity Management

    - by Tanu Sood
    Divide and conquer! Yes, divide and conquer today at Oracle OpenWorld with your colleagues to make the most of all things Identity Management, since there's a lot going on. Here's the line-up for today: Wednesday, October 3, 2012 CON9458: End End-User-Managed Passwords and Increase Security with Oracle Enterprise Single Sign-On Plus 10:15 a.m. – 11:15 a.m., Moscone West 3008 Most customers have a broad variety of applications (internal, external, web, client-server, host, etc.) and single sign-on systems that extend to some, but not all, systems. This session will focus on how enterprise single sign-on can help customers extend single sign-on to virtually any application without costly application modification, while laying a foundation that will enable integration with a broader identity management platform. CON9494: Sun2Oracle: Identity Management Platform Transformation 11:45 a.m. – 12:45 p.m., Moscone West 3008 Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum. CON9631: Entitlement-centric Access to SOA and Cloud Services 11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7 How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers yet make it visible to specialists as long as consent has been given or there is an emergency? In this session, Uberether and HerbaLife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service not just within your intranet but also right at the perimeter. CON3957 - Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012 11:45 a.m. – 12:45 p.m., Moscone West 3003 In this session, Virgin Media, the U.K.'s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand. CON9493: Identity Management and the Cloud 1:15 p.m. – 2:15 p.m., Moscone West 3008 Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity with cloud services and how some are offering identity management as a cloud service. CON9624: Real-Time External Authorization for Middleware, Applications, and Databases 3:30 p.m. – 4:30 p.m., Moscone West 3008 As organizations seek to grant access to broader and more diverse user populations, the importance of centrally defined and applied authorization policies becomes critical, both to identify who has access to what and to improve the end-user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals. CON9625: Taking Control of WebCenter Security 5:00 p.m. – 6:00 p.m., Moscone West 3008 Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users.
Leveraging LADWP's use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions. EVENTS: Identity Management Customer Advisory Board 2:30 p.m. – 3:30 p.m., Four Seasons – Yerba Buena Room This invitation-only event is designed exclusively for Customer Advisory Board (CAB) members to provide product strategy and roadmap updates. Identity Management Meet & Greet Networking Event 3:30 p.m. – 4:30 p.m., Meeting Session 4:30 p.m. – 5:30 p.m., Cocktail Reception Yerba Buena Room, Four Seasons Hotel, 757 Market Street, San Francisco The CAB meeting will be immediately followed by an open Meet & Greet event hosted by Oracle Identity Management executives and the product management team. Do take this opportunity to network with your peers and connect with Identity Management customers. For a complete listing, refer to the Focus on Identity Management document. And as always, you can find us at @oracleidm on Twitter and Facebook. Use #oow and #idm to join the conversation.

    Read the article

  • Migrating Core Data to new UIManagedDocument in iOS 5

    - by samerpaul
    I have an app that has been on the store since iOS 3.1, so there is a large install base out there that still uses Core Data loaded up in my AppDelegate. In the most recent set of updates, I raised the minimum version to 4.3 but still kept the same way of loading the data. Recently, I decided it's time to make the minimum version 5.1 (especially with 6 around the corner), so I wanted to start using the new fancy UIManagedDocument way of using Core Data. The issue with this, though, is that the old database file is still sitting in the iOS app, so there is no automatic migration to the new document. You basically have to subclass UIManagedDocument with a new model class, and override a couple of methods to do it for you. Here's a tutorial on what I did for my app TimeTag. Step One: Add a new class file in Xcode and subclass "UIManagedDocument" Go ahead and also add a method to get the managedObjectModel out of this class. It should look like: @interface TimeTagModel : UIManagedDocument - (NSManagedObjectModel *)managedObjectModel; @end Step two: Writing the methods in the implementation file (.m) I first added a shortcut method for the applicationDocumentsDirectory, which returns the URL of the app directory. - (NSURL *)applicationDocumentsDirectory { return [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject]; } The next step was to pull in the managedObjectModel file itself (.momd file). In my project, it's called "minimalTime". - (NSManagedObjectModel *)managedObjectModel { NSString *path = [[NSBundle mainBundle] pathForResource:@"minimalTime" ofType:@"momd"]; NSURL *momURL = [NSURL fileURLWithPath:path]; NSManagedObjectModel *managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL]; return managedObjectModel; } After that, I need to check for a legacy installation and migrate it to the new UIManagedDocument file instead. This is the overridden method: - (BOOL)configurePersistentStoreCoordinatorForURL:(NSURL *)storeURL ofType:(NSString *)fileType modelConfiguration:(NSString *)configuration storeOptions:(NSDictionary *)storeOptions error:(NSError **)error { // If the legacy store exists, copy it to the new location NSURL *legacyPersistentStoreURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTime.sqlite"]; NSFileManager* fileManager = [NSFileManager defaultManager]; if ([fileManager fileExistsAtPath:legacyPersistentStoreURL.path]) { NSLog(@"Old db exists"); NSError* thisError = nil; [fileManager replaceItemAtURL:storeURL withItemAtURL:legacyPersistentStoreURL backupItemName:nil options:NSFileManagerItemReplacementUsingNewMetadataOnly resultingItemURL:nil error:&thisError]; } return [super configurePersistentStoreCoordinatorForURL:storeURL ofType:fileType modelConfiguration:configuration storeOptions:storeOptions error:error]; } Basically, what's happening above is that it checks for the minimalTime.sqlite file inside the app's documents directory on the iOS device. If the file exists, it tells you in the console, and then tells the fileManager to replace the storeURL (from the method parameter) with the legacy URL. This basically gives your app access to all the existing data the user has generated (otherwise they would load into a blank app, which would be disastrous). It returns YES if successful (by calling its [super] method).
Final step: Actually load this database Due to how my app works, I actually have to load the database at launch (instead of shortly after, which would be ideal). I call a method called loadDatabase, which looks like this: -(void)loadDatabase { static dispatch_once_t onceToken; // Only do this once! dispatch_once(&onceToken, ^{ // Get the URL // The minimalTimeDB name is just something I call it NSURL *url = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTimeDB"]; // Init the TimeTagModel (our custom class we wrote above) with the URL self.timeTagDB = [[TimeTagModel alloc] initWithFileURL:url]; // Set up the undo manager if it's nil if (self.timeTagDB.undoManager == nil){ NSUndoManager *undoManager = [[NSUndoManager alloc] init]; [self.timeTagDB setUndoManager:undoManager]; } // You have to actually check to see if it exists already (for some reason you can't just call "open it, and if it's not there, create it") if ([[NSFileManager defaultManager] fileExistsAtPath:[url path]]) { // If it does exist, try to open it, and if it doesn't open, let the user (or at least you) know! [self.timeTagDB openWithCompletionHandler:^(BOOL success){ if (!success) { // Handle the error. NSLog(@"Error opening up the database"); } else{ NSLog(@"Opened the file--it already existed"); [self refreshData]; } }]; } else { // If it doesn't exist, you need to attempt to create it [self.timeTagDB saveToURL:url forSaveOperation:UIDocumentSaveForCreating completionHandler:^(BOOL success){ if (!success) { // Handle the error. NSLog(@"Error opening up the database"); } else{ NSLog(@"Created the file--it did not exist"); [self refreshData]; } }]; } }); } If you're curious what refreshData looks like, it sends out an NSNotification that the database has been loaded: -(void)refreshData { NSNotification* refreshNotification = [NSNotification notificationWithName:kNotificationCenterRefreshAllDatabaseData object:self.timeTagDB.managedObjectContext userInfo:nil]; [[NSNotificationCenter defaultCenter] postNotification:refreshNotification]; } kNotificationCenterRefreshAllDatabaseData is just a constant I have defined elsewhere that keeps track of all the NSNotification names I use. I pass the managedObjectContext of the newly created file so that my view controllers can have access to it, and start passing it around to one another. The reason we do this as a notification is because this is being run in the background, so we can't know exactly when it finishes. Make sure you design your app for this! Have some kind of loading indicator, or make sure your user can't attempt to create a record before the database actually exists, because it will crash the app.

    Read the article

  • Building The Right SharePoint Team For Your Organization

    - by Mark Rackley
    I see the question posted fairly often asking what kind of SharePoint team an organization should have. How many people do I need? What roles do I need to fill? What is best for my organization? Well, just like every other answer in SharePoint, the correct answer is "it depends". Do you ever get sick of hearing that??? I know I do… So, let me give you my thoughts and opinions based upon my experience and what I've seen, and let you come to your own conclusions. What are the possible SharePoint roles? I guess the first thing you need to understand is the different roles that exist in SharePoint (and there are LOTS). Remember, SharePoint is a massive beast and you will NOT find one person who can do it all. If you are hoping to find that person you will be sorely disappointed. For the most part this is true in SharePoint 2007 and 2010. However, things are generally improved in 2010 and easier for junior individuals to grasp. SharePoint Administrator The one role that you absolutely, positively should not be without, no matter the size of your organization or SharePoint deployment, is a SharePoint administrator. These guys are essential to keeping things running and figuring out what's wrong when things aren't running well. These unsung heroes do more before 10 am than I do all day. The bad thing is, when these guys are awesome, you don't even know they exist because everything is running so smoothly. You should definitely invest some time and money here to make sure you have some competent if not rockstar help. You need an admin who truly loves SharePoint and will go that extra mile when necessary. Let me give you a real-world example of what I'm talking about: We have a rockstar admin… and I'm sure she's sick of my throwing her name around, so she'll just have to live with remaining anonymous in this post… sorry Lori… Anyway! A couple of weeks ago our server teams came to us and said: Hi Lori, I'm finalizing the MOSS servers and doing updates that require a restart; can I restart them? Seems like a harmless request from your server team, does it not? Sure, go ahead and apply the patches and reboot during our scheduled maintenance window. No problem, right? Sounded fair to me… but no… not to our fearless SharePoint admin… I need a complete list of patches that will be applied. There is an update out there that will break SharePoint… KB973917 is the patch that has been shown to cause issues. What? You mean Microsoft released a patch that would actually adversely affect SharePoint? If we did NOT have a rockstar admin, our server team would have applied these patches, and then when some problem occurred in SharePoint we'd have had to go through the fun task of tracking down exactly what caused the issue and resolving it. How much time would that have taken? If you have a junior SharePoint admin, or an admin who's not out there staying on top of what's going on, you could spend days tracking down something as simple as applying a patch you should not have applied. I will even go so far as to say the only SharePoint rockstar you NEED in your organization is a SharePoint admin. You can always outsource really complicated development projects or bring in a rockstar contractor every now and then to make sure you aren't way off track in other areas. For your day-to-day sanity and to keep SharePoint running smoothly, you need an awesome admin. Some rockstars in this category are: Ben Curry, Mike Watson, Joel Oleson, Todd Klindt, Shane Young, John Ferringer, Sean McDonough, and of course Lori Gowin.
SharePoint Developer Another essential role for your SharePoint deployment is a SharePoint developer. Things do start to get a little hazy here and there are many flavors of “developers”. Are you writing custom code? using SharePoint Designer? What about SharePoint Branding?  Are all of these considered developers? I would say yes. Are they interchangeable? I’d say no. Development in SharePoint is such a large beast in itself. I would say that it’s not so large that you can’t know it all well, but it is so large that there are many people who specialize in one particular category. If you are lucky enough to have someone on staff who knows it all well, you better make sure they are well taken care of because those guys are ready-made to move over to a consulting role and charge you 3 times what you are probably paying them. :) Some of the all-around rockstars are Eric Shupps, Andrew Connell (go Razorbacks), Rob Foster, Paul Schaeflein, and Todd Bleeker SharePoint Power User/No-Code Solutions Developer These SharePoint Swiss Army Knives are essential for quick wins in your organization. These people can twist the out-of-the-box functionality to make it do things you would not even imagine. Give these guys SharePoint Designer, jQuery, InfoPath, and a little time and they will create views, dashboards, and KPI’s that will blow your mind away and give your execs the “wow” they are looking for. Not only can they deliver that wow factor, but they can mashup, merge, and really help make your SharePoint application usable and deliver an overall better user experience. Before you hand off a project to your SharePoint Custom Code developer, let one of these rockstars look at it and show you what they can do (in probably less time). I would say the second most important role you can fill in your organization is one of these guys. Rockstars in this category are Christina Wheeler, Laura Rogers, Jennifer Mason, and Mark Miller SharePoint Developer – Custom Code If you want to really integrate SharePoint into your legacy systems, or really twist it and make it bend to your will, you are going to have to open up Visual Studio and write some custom code.  Remember, SharePoint is essentially just a big, huge, ginormous .NET application, so you CAN write code to make it do ANYTHING, but do you really want to spend the time and effort to do so? At some point with every other form of SharePoint development you are going to run into SOME limitation (SPD Workflows is the big one that comes to mind). If you truly want to knock down all the walls then custom development is the way to go. PLEASE keep in mind when you are looking for a custom code developer that a .NET developer does NOT equal a SharePoint developer. Just SOME of the things these guys write are: Custom Workflows Custom Web Parts Web Service functionality Import data from legacy systems Export data to legacy systems Custom Actions Event Receivers Service Applications (2010) These guys are also the ones generally responsible for packaging everything up into solution packages (you are doing that, right?). Rockstars in this category are Phil Wicklund, Christina Wheeler, Geoff Varosky, and Brian Jackett. SharePoint Branding “But it LOOKS like SharePoint!” Somebody call the WAAAAAAAAAAAAHMbulance…   Themes, Master Pages, Page Layouts, Zones, and over 2000 styles in CSS.. these guys not only have to be comfortable with all of SharePoint’s quirks and pain points when branding, but they have to know it TWICE for publishing and non-publishing sites.  
Not only that, but these guys really need to have an eye for graphic design and be able to translate the ramblings of business into something visually stunning. They also have to be comfortable with XSLT, XML, and be able to hand off what they do to your custom developers for them to package as solutions (which you are doing, right?). These rockstars include Heater Waterman, Cathy Dew, and Marcy Kellar SharePoint Architect SharePoint Architects are generally SharePoint Admins or Developers who have moved into more of a BA role? Is that fair to say? These guys really have a grasp and understanding for what SharePoint IS and what it can do. These guys help you structure your farms to meet your needs and help you design your applications the correct way. It’s always a good idea to bring in a rockstar SharePoint Architect to do a sanity check and make sure you aren’t doing anything stupid.  Most organizations probably do not have a rockstar architect on staff. These guys are generally brought in at the deployment of a farm, upgrade of a farm, or for large development projects. I personally also find architects very useful for sitting down with the business to translate their needs into what SharePoint can do. A good architect will be able to pick out what can be done out-of-the-box and what has to be custom built and hand those requirements to the development Staff. Architects can generally fill in as an admin or a developer when needed. Some rockstar architects are Rick Taylor, Dan Usher, Bill English, Spence Harbar, Neil Hodgkins, Eric Harlan, and Bjørn Furuknap. Other Roles / Specialties On top of all these other roles you also get these people who specialize in things like Reporting, BDC (BCS in 2010), Search, Performance, Security, Project Management, etc... etc... etc... Again, most organizations will not have one of these gurus on staff, they’ll just pay out the nose for them when they need them. :) SharePoint End User Everyone else in your organization that touches SharePoint falls into this category. What they actually DO in SharePoint is determined by your governance and what permissions you give these guys. Hopefully you have these guys on a fairly short leash and are NOT giving them access to tools like SharePoint Designer. Sadly end users are the ones who truly make your deployment a success by using it, but are also your biggest enemy in breaking it.  :)  We love you guys… really!!! Okay, all that’s fine and dandy, but what should MY SharePoint team look like? It depends! Okay… Are you just doing out of the box team sites with no custom development? Then you are probably fine with a great Admin team and a great No-Code Solution Development team. How many people do you need? Depends on how busy you can keep them. Sorry, can’t answer the question about numbers without knowing your specific needs. I can just tell you who you MIGHT need and what they will do for you. I’ll leave you with what my ideal SharePoint Team would look like for a particular scenario: Farm / Organization Structure Dev, QA, and 2 Production Farms. 
5000 – 10000 Users Custom Development and Integration with legacy systems Team Sites, My Sites, Intranet, Document libraries and overall company collaboration Team Rockstar SharePoint Administrator 2-3 junior SharePoint Administrators SharePoint Architect / Lead Developer 2 Power User / No-Code Solution Developers 2-3 Custom Code developers Branding expert With a team of that size and skill set, they should be able to keep a substantial SharePoint deployment running smoothly and meet your business needs. This does NOT mean that you would not need to bring in contract help from time to time when you need an uber specialist in one area. Also, this team assumes there will be ongoing development for the life of your SharePoint farm. If you are just going to be doing sporadic custom development, it might make sense to partner with an awesome firm that specializes in that sort of work (I can give you the name of a couple if you are interested).  Again though, the size of your team depends on the number of requests you are receiving and how much active deployment you are doing. So, don’t bring in a team that looks like this and then yell at me because they are sitting around with nothing to do or are so overwhelmed that nothing is getting done. I do URGE you to take the proper time to asses your needs and determine what team is BEST for your organization. Also, PLEASE PLEASE PLEASE do not skimp on the talent. When it comes to SharePoint you really do get what you pay for when it comes to employees, contractors, and software.  SharePoint can become absolutely critical to your business and because you skimped on hiring a developer he created a web part that brings down the farm because he doesn’t know what he’s doing, or you hire an admin who thinks it’s fine to stick everything in the same Content Database and then can’t figure out why people are complaining. SharePoint can be an enormous blessing to an organization or it’s biggest curse. Spend the time and money to do it right, or be prepared to spending even more time and money later to fix it.

    Read the article

  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
    The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today's market has moved from network centric to consumer centric and focuses primarily on the customer experience. This has resulted in several product management challenges, such as increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence to the BSS layer, product co-creation, and increasing audit and security concerns for service providers. The document discusses how enterprise product management enabled by PLM-based product catalogue solutions helps to launch next generation products rapidly in the context of the Telecommunication Industry. 1.0. Introduction Figure 1: Business Scenario Modern business demands the launch of complex products in a very short timeframe and the ability to effect changes in the price plan faster without IT intervention. One of the key transformation initiatives companies are focusing on is in the area of product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based Product Management solutions developed on industry-wide standards. The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems. 2.0. An Overview of Product Management in Telecom Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, increased volume of offerings, bundled offering structures and ever increasing regulatory requirements. In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management. It is a roadblock in the journey towards becoming an agile organization.
Figure 2: Complexity in Product Management 'Network Technology' is the new dimension in telecom product management, where the same products are realized through different networks, i.e., from siloed networks to converged networks. Consequently, the product solution is different. Figure 3: Current Scenario - Pain Points in Product Management The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation. 3.0. Transformation of Next Generation Product Management Companies must focus on their Product Management Transformation Journey in the areas of: · Management of a single truth of product information across the organization/geographies, which is currently managed in heterogeneous systems · Management of the Intellectual Property (IP) on the product concept and partnership in the design of discrete components to integrate into the system · Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation · Management of effective operational separation to comply with regulatory bodies · Reuse of existing designs and addition of relevant features such as value-added services to enable effective product bundling Figure 4: Next generation needs PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler towards product management transformation and rapid product launch. 4.0. PLM-based Enterprise Product Management Figure 5: PLM-based Enterprise Product Mastering Enterprise Product Management (EPM) enables the business to manage complex product attributes and data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused. 4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management · Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules · PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective) · They maintain relationships/links between different elements of the entire product definition · Telco PLM packages are specialized in next generation lifecycle management requirements of products, such as revision and state management, test and release management, role management and impact analysis · They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order oriented and transactional · The new breed of Telco PLM packages is designed with 'open' standards such as SID and eTOM; they are interoperable and support integration frameworks such as subscription and notification · Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain 4.2 Various Architectures/Approaches for Product Mastering Using Telco PLM Systems 4.2.a Single Central Product Management (Mastering) Approach Figure 6: Single Central Product Management (Master) Approach This approach is implemented across verticals such as aerospace and automotive.
It focuses on a physically centralized product master on which the other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment. In addition, the product definition/authoring environment is centralized. The existing legacy product definition data available in the CRM product catalogue, the billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution. Architectural changes must be made in the existing business landscape of applications to create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data or by adapting the applications to use the rules that are managed and published centrally. Complete product configuration validation will be done in the enterprise/central product catalogue, and the final configuration will be sent to the B/OSS systems through the SOA-compliant product distribution architecture. The approach/architecture enables greater control in terms of product data management and product data governance. 4.2.b Federated Product Management (Mastering) Architecture Figure 7: Federated Product Management (Mastering) Architecture In the federated product mastering approach, the basic unique product definition data (product id, description, product hierarchy, basic price plans and simple product design rules) will be centrally created and maintained, while the advanced product definition (product bundling, promotions, offers and discount plans) will be created in the respective downstream OSS systems. For example, basic product definitions such as attributes, product hierarchy and basic price plans will be created and maintained in the enterprise/central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans on top of the basic product definition and master the advanced product definition. The central reference database accesses the other source product master data and assembles a point-in-time consolidated view of the product. The approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization's application landscape. However, this approach will not result in better product data management and data governance. 5.0 Customer Scenario – Before EPC deployment A leading global telecommunications service provider wanted to launch quad play and triple play service offerings in the shortest possible lead time. The service provider was offering broadband and VoIP services to customers. The company wanted to reuse a majority of the broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play.
The challenges in launching the new service offerings were:       Figure 8: Triple Play Plan   ·       Broadband product data was stored in multiple product catalogues (CRM catalogue, Billing catalogue, spread sheets)   ·       Product managers spent a lot of time performing tasks involving duplication or re-keying of data. Manual effort caused errors, cost and time over-runs.   ·       No effective product and price data governance mechanism. Price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue.   ·       Product data had re-usability issues and was not in a structured format. It resulted in uncontrolled product portfolio creation and product management issues.   ·       Lack of enterprise product model resulted into product distribution challenges and thus delays in product launch.   ·       Designers are constrained by existing legacy product management solutions to model product/service requirements and product configuration rules such as upgrading, downgrading and cross selling.    5.1 Customer Scenario - After EPC deployment     Figure 9: SOA-based end-to-end EPC Solution   The company deployed PLM-based Enterprise Product Catalogue solutions to launch quad play service after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for new offerings were created using the entities defined in the Enterprise Product Model. Supplier product catalogue data such as routers and set up boxes were loaded onto the new solution through SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in a SID format and were distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS system were handled using the transformation layer as part of the solution.   6.0 How PLM enabled Product Management Transformation         Figure 10: Product Management Transformation     PLM-based Product Catalogue Solution helped the customer reduce the product launch cycle time by 30% and enable transformation of Product Management for next generation services.   7.0 Conclusion   On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and increased complexity of products. On the other hand, the ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, introduce new value-added services, leverage Web 2.0 concepts and network capabilities. Consequently, large scale IT transformation initiatives to improve their ARPU supporting network and business transformations are a business imperative. Product Management has become a focus area. Companies are investing in best-in- class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero touch automation and rapid product launch.   References:   1.     Preston G.Smith, Donald G.Reineristsem, Van Nostrand Reinhold “Developing Products in Half the time”.   2.     John G. Innes, "Achieving Successful Product Change", Pitman Publishing.   3.     
6.0 How PLM Enabled Product Management Transformation

Figure 10: Product Management Transformation

The PLM-based Product Catalogue solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of Product Management for next-generation services.

7.0 Conclusion

On the one hand, the telecom industry is undergoing change driven by disruption, uncertain product markets and the increased complexity of products. On the other hand, ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, the flexibility to cross-sell and up-sell, the introduction of new value-added services, and the leveraging of Web 2.0 concepts and network capabilities. Consequently, large-scale IT transformation initiatives to improve ARPU, supporting both network and business transformations, are a business imperative, and Product Management has become a focus area. Companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero-touch automation and rapid product launch.

    Read the article

  • Safely delete a TFS branch project

    - by Codesleuth
    I'm currently reorganising our TFS source control for a very large set of solutions, and I've done this successfully so far. I have a problem at the moment where I need to delete a legacy "Release Branch" TFS project that was branched for the old structure and is no longer required, since I now host a release branch within the new structure. This is an example of how the source control now looks after moving everything:

        $/Source Project
            /Trunk
                /[Projects]
            /Release
                /[Projects]
        $/Release Branch Project
            /[Projects]
            /[Other legacy stuff]

    So far I've found information that says to use tf delete /lock:checkout /recursive TestMain to delete a branch, and TfsDeleteProject to delete a project. tf delete seems to be relevant only when I need to delete a branch that is within the same project as the trunk, and TfsDeleteProject doesn't seem like it will delete the branch association from the source project (I hope I'm wrong, see below). Can someone tell me if the above will work, and in what order I should perform the steps, to successfully delete the TFS $/Release Branch Project while also deleting the branch association (from right-click $/Source Project - Properties - Branches)?

    Read the article

  • POS for .NET: Failed to set properties of PosPrinter.

    - by StreamT
    I can't set properties of the PosPrinter class, for example PageModeStation, PageModeVerticalPosition, PageModePrintArea, etc.

        PosPrinter posPrinter = (PosPrinter)posExplorer.CreateInstance(posPrinterInfo);
        posPrinter.Open();
        posPrinter.Claim(1000);
        posPrinter.DeviceEnabled = true;
        posPrinter.PageModeVerticalPosition = 10; // <-- Exception thrown: Failed to set property PageModeVerticalPosition

    Exception details:

        Microsoft.PointOfService.PosControlException was unhandled
        Message="Failed to set property PageModeVerticalPosition."
        Source="Microsoft.PointOfService"
        ErrorCodeExtended=0
        StackTrace:
            at Microsoft.PointOfService.Legacy.LegacyProxy.SetProperty(String propertyName, Object propertyValue)
            at Microsoft.PointOfService.Legacy.LegacyPosPrinter.set_PageModeVerticalPosition(Int32 value)
            ...

    Any suggestions?

    Read the article

  • GetWindowPlacement/SetWindowPlacement not working in WinForms for high DPI

    - by RandomEngy
    I've got a legacy WinForms app and I want to save the window position and size across sessions. I've been using GetWindowPlacement and SetWindowPlacement during the FormClosing and Load events. The problem I'm getting is that at higher DPI settings (such as Medium size, 125%) the sizes keep inflating. I'll call SetWindowPlacement with a certain size, but when GetWindowPlacement is called, the numbers come back 25% bigger, even though the window was the same size all along. The same sort of problem exists when saving the size of a resizable element within the form. This works fine if I create a new WinForms project: the size stays stable even when running at the higher DPI. I'm guessing there's some legacy setting in the bowels of the project or some arcane Form setting that's messing it up, but I can't find out where. I've called IsProcessDPIAware on both projects and both return true. Does anyone know what might be causing this?

    Read the article

  • Web framework for an application utilizing an existing database?

    - by tputkonen
    A legacy web application written in PHP and using a MySQL database needs to be rewritten completely. However, the existing database structure must not be changed at all. I'm looking for suggestions on which framework would be most suitable for this task. The language candidates are Python, PHP, Ruby and Java. According to many sources it might be challenging to use Rails effectively with an existing database, and I have not found a way to automatically generate models from the database. With Django it's very easy to generate models automatically. However, I'd appreciate first-hand experience of its suitability for working with legacy databases, and I'd also appreciate suggestions of other frameworks worth considering.
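    Since Java is one of the candidate languages, the following is a minimal sketch of how a plain JPA entity can be mapped onto an existing table without touching the schema (the table and column names here are made up for illustration; schema export simply has to be left switched off in the persistence configuration):

        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.Table;

        // Maps a new entity class onto an existing legacy table as-is.
        @Entity
        @Table(name = "legacy_customer")
        public class Customer {

            @Id
            @Column(name = "customer_id")
            private Long id;

            @Column(name = "customer_name")
            private String name;

            // getters and setters omitted for brevity
        }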

    Read the article

  • How to Implement Loose Coupling with a SOA Architecture

    - by Brian
    I've been doing a lot of research lately about SOA, ESBs and so on. I'm now working on redesigning some legacy systems at work and would like to build them with more of an SOA architecture than they currently have. We use these services in about five of our websites, and one of the biggest problems with our legacy system is that almost every bug fix or update requires us to redeploy all five websites, which can be a time-consuming process. My goal is to make the interfaces between services loosely coupled so that changes can be made without having to redeploy all the dependent services and websites. I need the ability to extend an existing service interface without breaking or updating any of its dependents. Have any of you encountered this problem before? How did you solve it?
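    One common way to get that kind of extensibility, sketched below in Java purely as an illustration (the names are hypothetical), is to design the contract around a single request/response document rather than a fixed parameter list, so new optional fields can be added without recompiling or redeploying existing consumers:

        import java.util.HashMap;
        import java.util.Map;

        // The service contract stays stable while the request document evolves.
        interface OrderService {
            OrderResult placeOrder(OrderRequest request);
        }

        final class OrderRequest {
            private final Map<String, String> fields = new HashMap<>();

            void set(String name, String value) {
                fields.put(name, value);
            }

            // Tolerant reader: consumers ignore fields they don't know about,
            // and missing fields fall back to a default.
            String get(String name, String defaultValue) {
                return fields.getOrDefault(name, defaultValue);
            }
        }

        final class OrderResult {
            private final boolean accepted;

            OrderResult(boolean accepted) {
                this.accepted = accepted;
            }

            boolean isAccepted() {
                return accepted;
            }
        }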

    Read the article

  • Communicating between PHP and Java using ActiveMQ/Stomp

    - by scompt.com
    Background

    I have two services that need to communicate with each other over a message queue. One is a legacy service written in PHP and the other is in Java. Sooner rather than later, the PHP service will be rewritten in Java. The current way they communicate with each other is by writing to a shared database, which the other service polls. This is what I'm trying to get away from and replace with a message queue.

    Problem

    The communication I'm working on right now is from the PHP service to the Java service. It needs to send a relatively complex object (strings and integers, and lists and maps of strings and integers). Ideally, the solution would be workable in PHP and ideal in Java, as that's going to be the legacy of this project.

    Possible Solutions

    1.
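    For reference, a minimal sketch of the Java side consuming such a message from ActiveMQ over JMS, assuming the PHP side publishes the object as a JSON string via Stomp (the queue name and payload format are assumptions for the example):

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.Destination;
        import javax.jms.Message;
        import javax.jms.MessageConsumer;
        import javax.jms.Session;
        import javax.jms.TextMessage;

        import org.apache.activemq.ActiveMQConnectionFactory;

        public class LegacyQueueConsumer {
            public static void main(String[] args) throws Exception {
                ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
                Connection connection = factory.createConnection();
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Destination queue = session.createQueue("legacy.updates"); // assumed queue name
                MessageConsumer consumer = session.createConsumer(queue);

                // A Stomp message with a text body arrives here as a TextMessage.
                Message message = consumer.receive(5000);
                if (message instanceof TextMessage) {
                    String json = ((TextMessage) message).getText();
                    // Parse the JSON into maps/lists with whichever JSON library you prefer.
                    System.out.println("Received: " + json);
                }
                connection.close();
            }
        }

    The PHP side would then send the serialized object as the Stomp frame body, which keeps the wire format language-neutral until the PHP service is rewritten in Java.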

    Read the article

  • Terracotta With Hibernate and EHCache

    - by Joe Biron
    My head is swimming with the product name soup at http://www.terracotta.org, and I need someone to help clarify what I need. Background: the app has some "legacy" persistence code that does not use Hibernate but has a home-grown cache implementation. New entities are Hibernate-enabled. What I want: to use Terracotta for the Hibernate second-level cache. I think I then want to slide out the home-grown cache implementation and slide in EhCache (semantically very similar to the home-grown version), and obviously I want Terracotta to back that EhCache as well. What confuses me: will I be telling Hibernate that EhCache is its cache provider, and then configure EhCache to use Terracotta? So: (Hibernate | legacy persistence) -> EhCache -> Terracotta. Am I on the right track? Forgive the newbie question, but the terracotta.org site really confuses me since so much of it is trying to sell me the commercial varieties.
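    As a rough sketch of the Hibernate side of that layering (assuming Hibernate 3.3+ with the EhCache region factory; older releases configure a cache provider class instead), the Terracotta clustering itself is then switched on in the ehcache.xml that EhCache loads:

        import java.util.Properties;

        public class SecondLevelCacheConfig {
            // Properties handed to the Hibernate configuration/session factory.
            public static Properties cacheProperties() {
                Properties props = new Properties();
                // Turn on the second-level cache and back it with EhCache.
                props.setProperty("hibernate.cache.use_second_level_cache", "true");
                props.setProperty("hibernate.cache.use_query_cache", "true");
                // Hibernate 3.3+ region factory; for older Hibernate versions use
                // hibernate.cache.provider_class=net.sf.ehcache.hibernate.EhCacheProvider.
                props.setProperty("hibernate.cache.region.factory_class",
                        "net.sf.ehcache.hibernate.EhCacheRegionFactory");
                return props;
            }
        }

    Once the home-grown cache is swapped out, the legacy persistence code would talk to the same EhCache CacheManager directly, so both paths end up served by the same clustered caches.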

    Read the article

  • Alternate datasource for django model?

    - by slypete
    I'm trying to seamlessly integrate some legacy data into a Django application, and I would like to know whether it's possible to use an alternate datasource for a Django model. For example, can I contact a server to populate a model's list of objects? The server would not be SQL-based at all; instead it uses a proprietary TCP-based protocol. Copying the data is not an option, as the legacy application will continue to be used for some time. Would a custom manager allow me to do this? The model should behave just like any other Django model, and it should even be pluggable into the admin interface. What do you think? Thanks, Pete

    Read the article

  • The .NET ActiveX component with WPF content can't be loaded by a non-MFC app

    - by lonelyflyer
    I have a legacy Delphi program and want to add some content implemented with WPF, so I encapsulated the WPF control using .NET/ActiveX interop. That means something like:

        [ComRegisterFunction()]
        public static void RegisterClass(string key);

        [ComUnregisterFunction()]
        public static void UnregisterClass(string key);

    The ActiveX component is a WinForms user control, and the WPF content is attached to an ElementHost inside this user control. It works fine if the host app of this ActiveX is an MFC program, even without the /clr switch. But my legacy app is a Delphi program, and it always throws a StackOverflowException in the constructor of my WPF user control as soon as the program starts. I have no clue, Google is no help, and it has puzzled me for days.

    Read the article

  • Converting a delimited string to multiple values in MySQL

    - by epo
    I have a legacy MySQL table which contains a client identifier and a list of items, the latter as a comma-delimited string, e.g. "xyz001", "foo,bar,baz". This is legacy stuff and the user insists on being able to edit a comma-delimited string. They now have a requirement for a report table with the above broken into separate rows, e.g.

        "xyz001", "foo"
        "xyz001", "bar"
        "xyz001", "baz"

    Breaking the string into substrings is easily doable, and I have written a procedure to do this by creating a separate table, but that requires triggers to deal with deletes, updates and inserts. This query is needed rarely (say once a month) but has to be absolutely up to date when it is run, so the overhead of triggers is not warranted and scheduled tasks to create the table might not be timely enough. Is there any way to write a function that returns a table or a set, so that I can join the identifier with the individual items on demand?

    Read the article

  • SSIS - Update flag of selected rows from more than one table

    - by Rob Bowman
    Hi, I have an SSIS package that copies data from table A to table B and sets a flag in table A so that the same data is not copied subsequently. This works great by using the following as the SQL command text on the ADO NET Source object:

        update transfer
        set ProcessDateTimeStamp = GetDate(), LastUpdatedBy = 'legacy processed'
        output inserted.*
        where LastUpdatedBy = 'legacy' and ProcessDateTimeStamp is not null

    The problem I have is that I need to run a similar data copy but from two source tables joined on a primary/foreign key: select from table A joined to table B, then update the flag in table A. I don't think I can use the technique above because I don't know where I'd put the join! Is there another way around this problem? Thanks, Rob.

    Read the article

  • In Django, how to create tables from an SQL file when syncdb is run

    - by Sidney
    Hi, how do I make syncdb execute SQL queries (for table creation) defined by me, rather than generating the tables automatically? I'm looking for this solution because some models in my app represent SQL views of legacy database tables. I've created their SQL views in my Django database like this:

        CREATE VIEW legacy_series AS SELECT * FROM legacy.series;

    I have a reverse-engineered model that represents the above view/legacy table. But whenever I run syncdb, I have to create all the views first by running SQL scripts, otherwise syncdb simply creates tables for them (if a view is not found). How do I make syncdb run the above-mentioned SQL?

    Read the article

  • Is it possible to have an out-of-process COM server where a separate O/S process is used for each object?

    - by Tom Williams
    I have a legacy C++ "solution engine" that I have already wrapped as an in-process COM object for use by client applications that only require a single "solution engine". However, I now have a client application that requires multiple "solution engines". Unfortunately, the underlying legacy code has enough global data, singletons and threading horrors that, given the available resources, it isn't possible to have multiple instances of it in-process simultaneously. What I am hoping is that some kind soul can tell me of some COM magic whereby, with the flip of a couple of registry settings, it is possible to have a separate out-of-process COM server (a separate operating system process) for each instance of the COM object requested. Am I in luck?

    Read the article
