Search Results

Search found 21717 results on 869 pages for 'setup versions'.


  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, plus additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, containing business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide as a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a .NET web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component or how to manage the connection/session tokens. With this background, my question has the following three parts: Firstly, I have concerns about moving from the relative safety of an SSIS job, which protects me from the downtime and speed of the ERP system, to connecting directly with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier? Secondly, it is my understanding that data access objects (the components that connect directly with the web services and convert the responses into the data types I can then work with in my Domain Objects) should be singletons (and will act as an Adapter/Facade); am I correct? Thirdly, as part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this), which responds with a session token that must be provided on every subsequent request. Do I do this once and share it across the whole application, do I set up a new "connection" for every user of my application and keep the token in their session scope (which might quickly hit licensing limits), do I set the "connection" up per page request, or is there a design pattern I am missing that can manage multiple "connections" so that each request uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".
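
    As a rough illustration of the third part (pooling the session tokens), here is a minimal sketch in Python rather than CFML, purely because it is compact; the class name and the login callable are hypothetical and not part of any ERP vendor API. The same shape translates to a ColdSpring-managed singleton that hands out the first free token and can invalidate the whole pool when the ERP restarts.

        # Illustrative sketch only: a tiny pool of ERP session tokens guarded by a lock.
        # authenticate() stands in for whatever login call the web service actually exposes.
        import queue
        import threading

        class ErpSessionPool:
            def __init__(self, authenticate, size=4):
                self._authenticate = authenticate   # callable returning a fresh session token
                self._size = size
                self._tokens = queue.Queue()
                self._lock = threading.Lock()
                for _ in range(size):
                    self._tokens.put(authenticate())

            def acquire(self, timeout=5):
                # Block until a free token is available, shielding the ERP from bursts.
                return self._tokens.get(timeout=timeout)

            def release(self, token):
                self._tokens.put(token)

            def reset(self):
                # Called when the ERP is restarted: drop every token and log in again.
                with self._lock:
                    while True:
                        try:
                            self._tokens.get_nowait()
                        except queue.Empty:
                            break
                    for _ in range(self._size):
                        self._tokens.put(self._authenticate())

        # Usage sketch (login_to_erp is a hypothetical function wrapping the web service login):
        #   pool = ErpSessionPool(login_to_erp, size=4)
        #   token = pool.acquire(); ... ; pool.release(token)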

    Read the article

  • Looking for tips on managing complexity with SCM repositories

    - by Philip Regan
    I am a solo developer in my department and I have a lot of individual projects, all created and managed by me. I started using SVN at ProjectLocker via Versions on the Mac a couple of years ago when the variety of projects started getting unwieldy. Scenario 1: Now I have a process complex enough that it can be broken up into multiple smaller applications, and they all share files. In one phase, there is a single shared file—a constants file—that is shared between a Cocoa app and an iPhone app framework. In the second phase, the iPhone app framework will be used to create individual apps of the same ilk—controller classes and whatnot will all be the same—but with different content in each. The problem I am running across is that the file from the first phase is in one repository with the application that started it, while the app framework is in a second, separate repository. Scenario 2: I have another application framework that partially relies on code from an open source project. This is all internal, non-commercial work, but again, the application framework is going to be used to create a variety of unique products and processes. So now I have an internally managed repository and an externally managed one outside my control. When there is an update I download it and make small changes to the open source code to meet the needs of my framework, but I never commit back into the external repository (though, now that I think about it, I don't think I'm committing it to mine either. Oops). The Problem: I have all of this set up on my production Mac quite nicely, but duplicating and subsequently maintaining that environment on my laptop has been challenging. For Scenario 1, I've thought of merging these two projects into the same repository because they are, for all intents and purposes, inextricably linked. But for Scenario 2, I think I'm stuck just managing files as best I can. The Question: I'm wondering if anyone has any tips on how to manage either of these situations, as well as other complex SCM scenarios, when it comes to linking various files from various repositories together. My familiarity with SVN only comes from my work with Versions. It's been great, but I'm a little out of my depth here.
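
    For Scenario 1, one standard SVN answer is svn:externals, which lets the constants file (or its containing directory) stay in its original repository while being pulled into working copies of the app-framework project. A rough sketch, where the repository URL and the target folder name are placeholders, not real ProjectLocker paths:

        # Inside a working copy of the app-framework project, pin the shared directory
        # as an external. The URL and the "Shared" folder name here are placeholders.
        svn propset svn:externals "Shared http://projectlocker.example/shared/trunk" .
        svn commit -m "Pull shared constants in via svn:externals"
        svn update    # checks the external out into ./Shared on every machine

    Because the external is stored as a property in the repository, the laptop gets the same layout automatically on checkout, which also helps with the duplicate-environment problem.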

    Read the article

  • How to format HDD with MHDD Tool.

    - by PP
    I have copied the MHDD application onto a bootable CD and I want to use it to low-level format my HDD, which has some bad sectors on it. How can I mark these bad sectors using the MHDD application? Plus, can I partition/format my HDD using MHDD? After the low-level format I want to install Windows XP on that drive. Can someone explain, step by step, how to low-level format the drive, partition/format it, and then install XP on it? Link to the MHDD app: http://hddguru.com/content/en/software/2005.10.02-MHDD/ Thanks, PP.
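
    From memory, and hedged accordingly (key bindings and behaviour can differ between MHDD releases, so treat this as an assumption to verify against the MHDD documentation): after selecting the drive, the usual sequence is a surface scan with remapping enabled, optionally followed by a full erase. MHDD does not create partitions or filesystems, so the partitioning and NTFS format are best left to the Windows XP setup CD afterwards.

        SCAN     then F4, set "Remap" to ON, F4 again to start; remaps readable bad sectors
        ERASE    destructive overwrite of every sector (the closest thing to a "low level format")
                 afterwards, boot the XP CD and let its setup partition and format the drive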

    Read the article

  • git, maven and jenkins - versioning, dev and release builds workflow

    - by varesa
    What is the preferred way to do the following with git, maven and jenkins: I am developing an application for which I would like to maintain "dev" and "release" branches, and I would like Jenkins to build both. It could be so that the release artifacts would have versions like 1.5.2 and the dev builds would just be 0.0.1-SNAPSHOTs. I would like not to have to keep 2 different pom.xml files. I looked into profiles, but they don't seem to be able to change artifact versions. One way I looked at could be adding a 'qualifier' to the test builds. Of course I could just rename the file, because the real artifact information is not important here, since the app is a standalone one. What would be the preferred way of doing this? Or how would you do this?
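
    One common approach, sketched here as an assumption rather than the single right answer: keep one pom.xml and let each Jenkins job rewrite the version in its workspace before building, using the versions-maven-plugin. The version numbers below are only examples; the branch-to-version mapping is whatever the jobs decide.

        # Jenkins build step for the release branch (version chosen by the job, not the pom)
        mvn versions:set -DnewVersion=1.5.2
        mvn clean deploy

        # Jenkins build step for the dev branch
        mvn versions:set -DnewVersion=0.0.1-SNAPSHOT
        mvn clean deploy

    versions:set only edits the pom in the Jenkins workspace, so the repository itself keeps a single pom.xml.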

    Read the article

  • Monitors - inches vs resolution

    - by Vnuk
    I'm currently moving away from living five years on a laptop only, to a desktop setup. I'm browsing for monitors and I've noticed something strange. On my laptop I have 1920x1200 on 17". To get the same resolution on a monitor I have to get a Dell U2410 24" or a Samsung SM2443NW 24". I do not need (or want) 7 more inches of screen, I just want the 1920x1200 resolution. Why is this setup (high resolution on fewer inches of screen) available on a laptop but not on a regular monitor? I'm setting this as a community wiki because I think there is no right answer here...

    Read the article

  • How to restore missing calendar data from Lightning/Thunderbird

    - by dev9
    Today out of nowhere all my events and tasks disappeared from my Thunderbird. However, I have a full backup of .thunderbird folder. How can I restore my calendar data? I reverted these files to previous versions: /home/me/.thunderbird/xxx.default/calendar-data/local.sqlite /home/me/.thunderbird/xxx.default/prefs.js but I still cannot see any data in my Thunderbird. What else should I do?
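
    For what it's worth, a minimal restore sketch, assuming the missing calendars are local ("On My Computer") ones, the backup predates the data loss, and the backup path below is a placeholder. Lightning only reads local.sqlite at startup, so Thunderbird must be closed while the file is swapped; remotely hosted calendars are not stored in this file at all.

        # Close Thunderbird first; Lightning only reads local.sqlite at startup.
        pkill thunderbird
        cp ~/backup/.thunderbird/xxx.default/calendar-data/local.sqlite \
           ~/.thunderbird/xxx.default/calendar-data/local.sqlite
        # Sanity-check that the backup actually contains events (table name from
        # Lightning's storage schema; verify with ".tables" if in doubt):
        sqlite3 ~/.thunderbird/xxx.default/calendar-data/local.sqlite \
           "SELECT COUNT(*) FROM cal_events;"
        thunderbird &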

    Read the article

  • AWS SSL Load Balancer

    - by Jay Francis
    OK, I am looking for some pointers. Basically I have a white-label app/site that will allow users to set up their own domain to use for their customer front-end. We have 2 dedicated servers and a load balancer. The problem is SSL; we were thinking about using AWS ELB to handle the SSL load balancing, but we can't seem to figure out if it will handle it properly. It seems to be set up to work with EC2 instances, but we are using externally hosted servers behind a load balancer. A blog post by AWS looks similar to what we need, but it too only seems to work with EC2 instances. http://aws.typepad.com/aws/2011/08/elastic-load-balancer-ssl-support-options.html Has anyone had experience setting up ELB SSL load balancers to work with external servers?

    Read the article

  • Lightning Wallpaper Collection for Your Nexus 7

    - by Akemi Iwaya
    Lightning can be frightfully powerful and eerily beautiful at the same time, a force of nature that is not to be taken lightly. Harness the ‘power of nature’ by electrifying your Nexus 7's screen with the first in our series of Lightning Wallpaper collections. Lightning Series 1 Note: Click on the pictures to view and download the full-size versions at their individual homepages. The images shown here are in thumbnail format.

    Read the article

  • Sonicwall NAT Policy Loopback

    - by John
    I have an issue and am pretty perplexed over it. I have a SonicWall and it's set up with NAT policies and reflexive NAT for an internal web server. That is, only 2 policies, no loopback policy, and the internal clients can access the web server by public IP with no problems. Now, on another connection, another SonicWall, I have the exact same setup for another web server, with exactly the same policies (obviously different IPs), and the internal clients can't access the internal website by its public IP without creating the loopback policy. Maybe on the first one I've overlooked it, but I don't see any loopback whatsoever and it's working fine. My question is, does anyone know why the first one works like this but the second one needs the loopback policy? Thanks

    Read the article

  • The Best Ways to Make Use of an Idle Computer

    - by Lori Kaufman
    If you leave your computer on when you are not using it, there are ways you can put it to use while it's sitting idle. It can do scientific research, back up your data, and even look for signs of extraterrestrial life.

    Read the article

  • Running PHP 5.2 FastCGI + Apache on CentOS 5 issue

    - by Goran
    I am trying to set up 2 versions of PHP on CentOS 5.9 using this tutorial: http://linuxplayer.org/2011/05/intall-multiple-version-of-php-on-one-server. I have installed the default 5.4.19, and I was trying to set up another PHP version, 5.2.17, to run with FastCGI; I followed the second part of the tutorial completely. However, when I try to load http://web2.example.com it returns a 500 error. In the Apache log there are only 2 lines that repeat: [notice] mod_fcgid: call /var/www/web2/index.php with wrapper /usr/local/php52/bin/fcgiwrapper.sh and [notice] mod_fcgid: process /var/www/web2/index.php(25250) exit(server exited), terminated by calling exit(), return code: 255 Please note that I had to add .php at the end of the FCGIWrapper line because Apache would not start without it: FCGIWrapper /usr/local/php52/bin/fcgiwrapper.sh .php Also please note that http://web1.example.com with PHP 5.4.19 is working absolutely fine. Please help. Thank you very much in advance.
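
    Return code 255 from mod_fcgid usually means the wrapped php-cgi process died immediately (wrong binary path, missing shared libraries, or a PHP build without CGI support). A hedged sketch of what the wrapper script and vhost from that kind of tutorial normally look like; the paths mirror the ones in the log lines above, everything else is an assumption to adapt:

        #!/bin/bash
        # /usr/local/php52/bin/fcgiwrapper.sh -- must end by exec'ing the 5.2 CGI binary
        PHP_FCGI_CHILDREN=4
        PHP_FCGI_MAX_REQUESTS=1000
        export PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS
        exec /usr/local/php52/bin/php-cgi

        # Matching Apache 2.2 / mod_fcgid directives for the web2 vhost:
        <VirtualHost *:80>
            ServerName web2.example.com
            DocumentRoot /var/www/web2
            <Directory /var/www/web2>
                Options +ExecCGI
                AddHandler fcgid-script .php
                FCGIWrapper /usr/local/php52/bin/fcgiwrapper.sh .php
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Running /usr/local/php52/bin/php-cgi -v by hand is a quick way to see whether the binary itself starts; if it prints a version, the problem is more likely a missing ExecCGI option or SELinux blocking execution of the wrapper.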

    Read the article

  • Python 3.2 is available with improved standard libraries

    Python 3.2 is available, with improved standard libraries. Update of 21/02/11 by Gordon Fowler. Since the freeze of Python's syntax, the 2.x branch will only receive bug fixes; the real new features will be integrated directly into the 3.x branch. It is therefore with a certain interest that developers will greet the release of Python 3.2, made available over the weekend. The efforts of the Python development teams ...
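
    As a quick taste of those standard-library improvements, 3.2 adds the concurrent.futures module for simple thread and process pools. A minimal example, unrelated to any benchmark from the article itself:

        # concurrent.futures is one of the modules new in Python 3.2 (PEP 3148).
        from concurrent.futures import ThreadPoolExecutor

        def measure(word):
            # Stand-in for real work done per item.
            return word, len(word)

        with ThreadPoolExecutor(max_workers=4) as pool:
            for word, n in pool.map(measure, ["python", "3.2", "futures"]):
                print(word, n)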

    Read the article

  • setting up rhel 5.x RPM build server for mortal users

    - by Chen Levy
    My task is to set up a RHEL 5.x build host that can build RPMs for mortal users. On RHEL 6.x with rpm version 4.8, I have in /usr/lib/rpm/macros: # Path to top of build area. %_topdir %{getenv:HOME}/rpmbuild On RHEL 5.x with rpm version 4.4, %{getenv:HOME} is not available. I know that I can use /home/someuser/.rpmmacros: %_topdir /home/someuser/rpmbuild and this will work for that user; however, I don't want to do this for every user separately. Moreover, since .rpmmacros will not expand ${HOME} or ~, I suspect it is unsafe to use those, which in turn makes /etc/skel unsuitable for this task (or so I suspect). So in short, my question is: how do I set up a RHEL 5.x host that allows all users to build RPM packages in their home directories?
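
    One approach that is often suggested for older rpm versions, sketched here as an assumption to test rather than a guaranteed fix: rpm macros can shell out with %(...), so a system-wide macros file can derive the build area from each user's $HOME at macro-expansion time.

        # /etc/rpm/macros  (read for every user, unlike ~/.rpmmacros)
        # %(...) runs a shell command when the macro is expanded, so $HOME is the
        # invoking user's home directory at rpmbuild time.
        %_topdir %(echo $HOME)/rpmbuild

    Each user still needs the rpmbuild subdirectories (BUILD, RPMS, SOURCES, SPECS, SRPMS), which could be seeded from /etc/skel or created by a small login script.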

    Read the article

  • Oracle Global HR Cloud Implementation Training Can Help Meet Your Business Needs

    - by HCM-Oracle
    By Jim Vonick. A key goal for the deployment of your Oracle Global HR Cloud applications is to accelerate the implementation and adoption of your applications, so that your business can start realizing all of the benefits that this rich solution offers. Implementation team members need to have the skills and knowledge to ensure a smooth, rapid and successful implementation of your applications. During set-up, you want to optimize the configuration to best meet your business needs. In order to do this you need to understand the foundation and configuration options of your applications, so that decisions can be made during set-up that best align with your business. To that end, product-level implementation training is recommended for Oracle Global HR Cloud deployments. Training for Implementation Team Members and Consultants: Fusion Applications: HCM Security: Learn how to implement security for Oracle Fusion HCM applications by creating and customizing roles. You'll learn how to create security profiles to restrict data access, provision roles to users, create and manage user accounts, and verify security setup. Fusion Applications: HCM Global Human Resources: Learn how to set up your enterprise and workforce structures, how to perform functional tasks, and how to configure security for Global Human Resources data. Fusion Applications: HCM Compensation: Learn how to implement, configure, and use Oracle Fusion Compensation to manage base pay, individual compensation, workforce compensation, and total compensation statements. Fusion Applications: HCM Benefits: This course teaches you to implement, configure and manage Oracle Fusion Benefits, including how to implement benefit plans and programs. Fusion Applications: HCM Payroll Implementation (US): This course provides implementation training for payroll managers or payroll administrators. Learn how to process payroll to ensure accurate setup results. Learn More: See all Fusion HCM Training. Jim Vonick is a Senior Product Manager with Oracle University focusing on training for Oracle Applications and Industry Solutions.

    Read the article

  • How do I get fan control working?

    - by RobinJ
    I know there is something called fancontrol that enables you to control the speed of your system's fans. I'd like to let my fans spin a bit faster, as my laptop heats up very easily. All the tutorials and guides I've found are for old versions of Ubuntu and don't seem to work anymore. Can anyone explain to me, or point me to a good link on, how I can get it working on current Ubuntu? Something different with the same effect is also fine.
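
    On recent Ubuntu releases the usual route is lm-sensors plus the fancontrol daemon, sketched below; this is hedged, because on many laptops the fan is driven by the embedded controller/ACPI and pwmconfig may find no controllable PWM outputs at all.

        sudo apt-get install lm-sensors fancontrol   # sensor tools plus the fancontrol daemon
        sudo sensors-detect                          # probe for sensor/PWM chips, answer the prompts
        sudo pwmconfig                               # interactive: maps PWM outputs to fans, writes /etc/fancontrol
        sudo service fancontrol start                # start the daemon with the generated config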

    Read the article

  • IIS6 Permissions

    - by Gordon Carpenter-Thompson
    We have a set of IIS6 Jakarta/ASP.NET applications (implemented as virtual directories) on a machine without a domain. The directories all exist under the default website. We need to set up the permissions so that certain users can access only specific applications, yet other users can access several of the applications. The way it's been set up previously has been to explicitly deny access to the users for every application except the ones that they are allowed to see. The problem is that the list of applications changes fairly often (for demos etc.), and it's been known for the developers to forget to deny the old users access to the new applications, which leads to security problems. This is all quite unmaintainable. Does anybody have any advice on this? Surely I can't be the only person to find this all a bit of a mess? Thanks
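
    One commonly suggested way to make this maintainable, sketched with hypothetical group and folder names: stop denying per user, create one local Windows group per application, grant only that group NTFS read access to the virtual directory's folder, and then manage access purely through group membership. New applications are then inaccessible until a group is explicitly granted, rather than accessible until everyone is denied.

        rem Create a group for one application and grant it read access to that app only
        net localgroup App_DemoPortal /add
        net localgroup App_DemoPortal alice /add
        cacls "D:\Sites\DemoPortal" /E /G App_DemoPortal:R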

    Read the article

  • Problem with XeTeX (LaTeX) and system fonts

    - by mghg
    I have started to use an enterprise-specific class for LaTeX, but have a problem with using system fonts on Ubuntu. The class uses the fontspec package, and I have therefore been instructed to use XeTeX (i.e. the command xelatex instead of latex or pdflatex). However, the command xelatex testfile.tex results in the following message: ! Package xkeyval Error: `TeX' undefined in families `Ligatures'. See the xkeyval package documentation for explanation. Type H <return> for immediate help. ... l.61 \newfontfamily\headfont{Arial} ? The class has previously been used on Mac and Windows, and the font setup is as follows: \newfontfamily\headfont{Arial} \newcommand\texthead[1]{\headfont #1} \setromanfont{Georgia} \setmainfont{Georgia} \setsansfont[Scale=MatchLowercase]{Verdana} It has been suggested that, since XeTeX makes use of system fonts and the class file has worked flawlessly on Mac and Windows, the problem might be that Arial is not a name used on Ubuntu. I have tried to exchange Arial for Ubuntu Light in the setup code above, but that has not brought any improvement. Any suggestions, please, on how to move forward?
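
    Two separate things may be going on here: the "Ligatures" error usually points to a fontspec/XeTeX version mismatch (older TeX Live installs predate the Ligatures=TeX option that newer classes pass to every font), and Arial/Georgia/Verdana are not installed on Ubuntu by default. A hedged sketch of what to check and how a substitution might look; the font choices below are assumptions, not something the class requires:

        % From a shell, check which family names fontconfig actually knows:
        %   fc-list | grep -i arial
        %   fc-list | grep -i georgia
        % (Arial, Georgia and Verdana arrive with the ttf-mscorefonts-installer package.)

        \usepackage{fontspec}
        % Older fontspec releases do not understand Ligatures=TeX; either update
        % TeX Live or fall back to the older spelling of the same feature:
        \defaultfontfeatures{Mapping=tex-text}

        % If Arial is genuinely missing, a metric-compatible substitute that ships
        % with Ubuntu can stand in:
        \newfontfamily\headfont{Liberation Sans}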

    Read the article

  • Discover MAC address

    - by Kami
    I have to set up a bunch of servers! I need to discover their MAC addresses in the following situation: MacBookPro >----------< Server I'm directly connected (not behind a router/switch) to the server. I have no clue which IP address the server is using in its default setup. I can't use a display connected to the server to read its network card config off the screen. How can I discover the MAC address of the server's network card? I'm looking for a command line tool. If something exists in MacPorts, that's also OK!
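
    A hedged sketch using only tools that ship with OS X (no MacPorts needed): watch the wire for any broadcast traffic the server sends (ARP, DHCP), and read its MAC from the link-level headers, or provoke a reply and read the ARP cache. The interface name and subnet below are assumptions to adapt.

        # Watch raw frames on the wired interface; -e prints source/destination MAC addresses,
        # so any ARP or DHCP broadcast from the server reveals its MAC immediately.
        sudo tcpdump -e -n -i en0

        # Or: put the MacBook on a likely subnet, ping the broadcast address, then read the ARP cache.
        sudo ifconfig en0 192.168.1.10 netmask 255.255.255.0
        ping -c 3 192.168.1.255
        arp -a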

    Read the article

  • Cannot run setups from a vboxsvr mapped network drive on Windows 7 within VirtualBox

    - by Dimitri C.
    I'm trying to run an application setup by double-clicking the setup.exe from within Windows Explorer. The file is located on a mapped network drive, and I'm using Windows 7. This results in the following error message: The specified path does not exist. Check the path, and then try again. The workaround I found is to copy the installer to the main hard drive (c:) and run it from there; however, this is rather inconvenient. The same action did work on Windows 2000, Windows XP and Windows Vista. I have the impression that the problem only occurs with installers, as everything seemed to work fine with regular exe's. Is there anyone who can explain this odd behavior? Update: After some extended tests I noticed that the problem only occurs with a mapped drive of VirtualBox's "shared folders" (cf. vboxsvr). Mapping an SMB drive works fine.

    Read the article

  • redmine repository management

    - by Alex
    We are trying to set up a Redmine installation for our group which should work with both SVN and Git repos. Since we want to keep the repos on the server and avoid the whole privileges and hosting mess (root access, local repos, ...), we want to configure Redmine to manage repo creation and destruction by itself. In short, Redmine should create a repository automatically for a new project and delete it if the project is deleted, with no extra setup steps from our admin. So far I have found reposman for SVN and redmine_git_hosting for Git, but I am unsure whether they match our requirements. Are these the tools we are looking for, or is there any other alternative? Thank you
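
    For the SVN side, reposman.rb (shipped with Redmine under extra/svn/) is built for exactly this: run from cron, it creates one repository per project and registers it in Redmine over the API. A sketch, where the hostname, paths and API key are placeholders, and the exact flag names should be confirmed with reposman.rb --help for the installed Redmine version:

        # Typically run from cron on the Redmine server.
        ruby /opt/redmine/extra/svn/reposman.rb \
          --redmine redmine.example.com \
          --svn-dir /var/svn \
          --url file:///var/svn \
          --owner www-data \
          --key YOUR-REDMINE-API-KEY \
          --verbose

    Note that reposman only creates repositories; deleting a project's repository when the project is deleted is not something it does on its own.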

    Read the article

  • Install Mouse Driver on Windows 7 Professional 64 bit

    - by Soren
    I have a mouse that I absolutely love (been using them for years), A4Tech WOP-35. It has dual scrollers and 5 buttons, 3 of the buttons are programmable. I use them at work and at home. At work I am using Windows 7 Enterprise (32 bit), at home I am using Windows 7 Professional (64 bit). The drivers installed easily on my machine at work. Unfortunately, they will not install on my computer at home. When I double click on the Setup.exe, it asks me if I want to install it, and of course I click on "Yes", but nothing happens. When I say nothing happens, I mean nothing happens; it appears that it doesn't even try to install. The same thing happens when I right click on the setup.exe and select run as administrator. How can I get around this? I am guessing it is because I am running the 64 bit version of Windows.

    Read the article

  • Debugging / troubleshooting .inf driver install in Windows

    - by Jesus Cuenca
    I'm trying to install a graphics driver on a laptop with Windows 7, but the setup fails with a timeout. I checked the setup log, but it does not include any details about the .inf installation. It only shows the error code, which I searched for, only to find what I already knew (timeout error, no more details). I also checked the .inf file, but it has hundreds of lines, so it's hard to guess which step is failing. So I'm wondering: is there some procedure to debug a .inf file? Like having a detailed log of what happened after executing each line of the .inf. Thanks!
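
    On Windows 7 the SetupAPI layer keeps its own log of driver/.inf installs, separate from whatever log the vendor's setup writes, and its verbosity can be raised before reproducing the failure. A hedged sketch; the LogLevel value is quoted from memory, so treat it as an assumption to verify in the SetupAPI logging documentation:

        rem Every PnP/.inf install is logged here, section by section:
        notepad C:\Windows\inf\setupapi.dev.log

        rem Raise SetupAPI logging verbosity before re-running the installer
        rem (0x2000FFFF is the commonly cited "log everything" value; delete the
        rem value afterwards to restore the default):
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup" /v LogLevel /t REG_DWORD /d 0x2000FFFF /f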

    Read the article

  • Logstash shipper & server on the same box

    - by keftes
    I'm trying to set up a central logstash configuration. However, I would like to be sending my logs through syslog-ng and not third-party shippers. This means that my logstash server is accepting, via syslog-ng, all the logs from the agents. I then need to install a logstash process that will be reading from /var/log/syslog-clients/* and grabbing all the log files that are sent to the central log server. These logs will then be sent to redis on the same VM. In theory I also need to configure a second logstash process that will read from redis, start indexing the logs, and send them to elasticsearch. My question: Do I have to use two different logstash processes (shipper & server) even if I am on the same box (I want one log server instance)? Is there any way to just have one logstash configuration and have the process read from syslog-ng --- write to redis and also read from redis --- output to elasticsearch? Diagram of my setup: [client]-------syslog-ng--- [log server] ---syslog-ng <----logstash-shipper --- redis <----logstash-server ---- elastic-search <--- kibana
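
    One possibility, offered as a minimal sketch under the assumption that the redis hop can simply be dropped when everything lives on one VM: a single logstash process that tails the files syslog-ng writes and indexes straight into Elasticsearch. Keeping redis mainly makes sense as a buffer between machines or when more shippers will be added later; in that case two processes (shipper and indexer) remain the usual pattern.

        # single-process logstash.conf -- shipper and indexer in one
        input {
          file {
            path => "/var/log/syslog-clients/*"
            type => "syslog"
          }
        }
        output {
          elasticsearch {
            host => "127.0.0.1"   # option name in logstash versions of that era; later renamed to "hosts"
          }
        }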

    Read the article

  • Application switcher is broken

    - by Byron Hawkins
    After a normal update of my Ubuntu 12.04 install last week, my application switcher has stopped working. I've tried all different settings in CompizConfig, including a variety of shortcut keys and both switcher versions ("Application Switcher" and "Static Application Switcher"). So far there has been no way to get any form of application switcher to appear on my screen. Can anyone give me an idea what might be wrong, or where I might look for more information? Thanks for your help.

    Read the article
