Search Results

Search found 8692 results on 348 pages for 'per magnusson'.

  • Data Center Modernization: Harness the power of Oracle Exalogic and Exadata with PeopleSoft

    - by Michelle Kimihira
    Author: Latha Krishnaswamy, Senior Manager, Exalogic Product Management. Allegis Group, a Hanover, MD-based global staffing company, is the largest privately held staffing company in the United States, with more than 10,000 internal employees and 90,000 contract employees. Allegis Group is a $6+ billion company, offering a full range of specialized staffing and recruiting solutions to clients in a wide range of industries. The company processes about 133,000 paychecks per week, every week of the year. With 300 offices around the world and the hefty task of managing HR and payroll, the PeopleSoft system at Allegis is a mission-critical application. The firm is in the midst of a data center modernization initiative. Part of that project meant moving the company's PeopleSoft applications (Financials and HR modules as well as a custom Time & Expense module) to a converged infrastructure. The company ran a proof of concept with four different converged architectures before deciding on Exadata and Exalogic as the platform of choice. Performance combined with high availability for running mission-critical payroll processes drove this decision. During the testing on Exadata and Exalogic, Allegis applied a particular tax update (11-F) in the production environment. A job that had run for roughly six hours completed in less than 1.5 hours. With additional tuning, the second run of tax update 11-F dropped to 33 minutes - a 90% improvement! Not only that, the move will help the company save money on middleware by consolidating its Oracle licensing on a single platform. Summary: With a modern data center powered by Exalogic and Exadata to run mission-critical PeopleSoft HR and Financial applications, Allegis is positioned to manage business growth and improve employee productivity. The PeopleSoft applications run on an engineered systems platform, minimizing hardware and software integration risks. Additional Information: Product Information on Oracle.com: Oracle Fusion Middleware. Follow us on Twitter and Facebook. Subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • Trying to recover deleted Ubuntu partition

    - by user110984
    I made a mistake in logging into my 200 GB Ubuntu partition. I could not access GRUB after that. Using a live CD I then ran Boot-Repair and apparently deleted the partition, I guess because I ran it from my 70 GB Windows partition. I can send the results of boot_info from before that and of Boot-Repair. Then I ran TestDisk, which apparently found only /dev/sda - 320 GB / 298 GiB - WDC WD3200BEVT-22A23T0. (Was there anything more I could have done with TestDisk? I looked at the TestDisk_Step_By_Step example and found no way forward, given that no other partitions turned up.) I have run gpart and found this: sda1 - 15 GB, sda2 - system reserved, sda3 - 70.15 GB, sda4 - extended 212.84, unallocated - 209.10, sda5 - unknown 3.74. I have been told I can recover the partition using parted's "rescue start end" command, but I don't know what to enter for start and end. [--EDIT: TestDisk's Deeper Search stated that "the following partitions can't be recovered" and listed a 220-GB Linux partition 6 times. Then it stated that "The current number of heads per cylinder is 255 but the correct value may be 128" and that I could try to change it in the Geometry menu (because apparently these are overlapping partitions). So should I do that?--]

    Read the article

  • Procedural content (settlement) generation

    - by instancedName
    I have, let's say, something like a homework assignment to do. Roughly speaking, I need to write an algorithm (pseudocode is not necessary, just an in-depth description) of a procedure that would generate settlements, their environment and the people to populate them with, as part of some larger world-generation procedure. The genre of the game is not specified; it could be any genre (RPG, strategy, colony simulation, etc.) where interacting with a large and extensive world is central to the game. The procedure should be called once per settlement. At the time of calling, the world-generation procedure makes geography, culture and history input available. The output should be a map of the village and its immediate area, and various potential additional information like myths, history, demographic facts, etc. A bonus would be quests and similar stuff, but that's not really my focus at the moment. I will leave the quality of the output for later, when I actually dig a little deeper into this topic. I am free to change parameters as long as I have a strong explanation for doing so. The setting of the game is undetermined, so I am free to use anything I like. OK, so my actual question is: can anyone who has some experience in this field of game design recommend some good literature, or point me in the direction where I should look/read/study? I'm a somewhat experienced game programmer, but I've never been into game design till now, so any help will be great. I want to do this assignment as well as I can. As for the deadline, it's not strictly set, but let's say I don't want it to take longer than a few weeks, one month in the worst case.
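
    For reference, the inputs and outputs described above could be sketched as a function signature like the following (a rough Python illustration only; every name here is hypothetical and not taken from the assignment):

        from dataclasses import dataclass, field

        @dataclass
        class SettlementOutput:
            village_map: list[list[str]]          # tile grid of the village and its immediate area
            demographics: dict[str, int]          # e.g. population by occupation
            myths: list[str] = field(default_factory=list)
            local_history: list[str] = field(default_factory=list)

        def generate_settlement(geography: dict, culture: dict, history: dict) -> SettlementOutput:
            """Called once per settlement by the larger world-generation procedure."""
            # ... placement, layout and population logic would go here ...
            return SettlementOutput(village_map=[["grass"]], demographics={"farmer": 10})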

    Read the article

  • "unresolvable problem" error when upgrading from 12.04 to 14.04

    - by flyingfisch
    So I have solved this issue, but now I have another problem: An unresolvable problem occurred while calculating the upgrade. This can be caused by: * Upgrading to a pre-release version of Ubuntu * Running the current pre-release version of Ubuntu * Unofficial software packages not provided by Ubuntu If none of this applies, then please report this bug using the command 'ubuntu-bug ubuntu-release-upgrader-core' in a terminal. I am not upgrading to a pre-release version of Ubuntu and I am not running a pre-release either. I have unchecked all my 3rd-party packages using Ubuntu Software Manager, Edit > Software Sources... What else might be wrong? UPDATE: After doing sudo update-manager -d and sudo apt-get update; sudo apt-get dist-upgrade as per JimB's post, and then running sudo do-release-upgrade, here is what I get: Err http://extras.ubuntu.com trusty/main Translation-en Err http://extras.ubuntu.com trusty/main Translation-en_US Err http://extras.ubuntu.com trusty/main Translation-en Ign http://extras.ubuntu.com trusty/main Translation-en_US Ign http://extras.ubuntu.com trusty/main Translation-en Fetched 0 B in 0s (0 B/s) Checking package manager Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Calculating the changes Calculating the changes Could not calculate the upgrade An unresolvable problem occurred while calculating the upgrade. This can be caused by: * Upgrading to a pre-release version of Ubuntu * Running the current pre-release version of Ubuntu * Unofficial software packages not provided by Ubuntu If none of this applies, then please report this bug using the command 'ubuntu-bug ubuntu-release-upgrader-core' in a terminal. Restoring original system state Aborting Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done === Command detached from window (Mon Aug 18 23:53:10 2014) === === Command terminated with exit status 1 (Mon Aug 18 23:53:10 2014) ===

    Read the article

  • configure open_basedir under Plesk

    - by cori
    This might be a question for ServerFault, and if it weren't for the Plesk aspect I would ask it there to start with, so if it's better suited for over there let me know and I'll move it. I'm working on a dedicated server set up as a reseller account with Plesk to manage the domains and server configuration, and I need to add a directory to the local open_basedir configuration for a specific vhost. Given Plesk's normal methodology, I expected to be able to go to /var/www/vhost/{%DOMAINNAME%}/conf, modify vhost.conf and place a new value there, as I have successfully done with other configuration settings for this domain (turning safe_mode off, for instance). When I do so, however, the new setting doesn't take effect (per phpinfo();). If I edit httpd.conf (which the Plesk configuration specifically says not to do in the notes at the top of httpd.conf) the setting takes effect. Is there something specific about the open_basedir setting that makes it not configurable in vhost.conf? How much trouble am I letting myself in for by editing the vhost-specific httpd.conf (I imagine if someone makes changes in the Plesk web interface it might be overwritten, but what other risk is there)? Thanks!
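
    For illustration, a per-vhost open_basedir override in vhost.conf usually takes the shape of an Apache Directory block like the sketch below; the paths and the extra directory are placeholders rather than values from this server, and whether it wins over the value Plesk generates in its own httpd.include depends on the Plesk version and on PHP running as an Apache module:

        <Directory /var/www/vhosts/example.com/httpdocs>
            php_admin_value open_basedir "/var/www/vhosts/example.com/httpdocs:/tmp:/path/to/extra/dir"
        </Directory>

    Plesk also normally has to regenerate its web server configuration (via its websrvmng/httpdmng utility, depending on the version) before a new or changed vhost.conf is picked up.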

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world state would need to be updated about once per minute. I am looking for an answer that persuades me either to perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here would be to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require: reading 5 private entity variables (fetched from the datastore); fetching as many as 20 static variables (from the datastore, or persisted in server memory); writing 5 entity variables. Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute. This would update all of the entities and save the results to the datastore. This would be more CPU-intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states and pushing the new state variables back to the datastore. This would be more bandwidth-intensive for the datastore.
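
    To put the quota question in perspective, here is a back-of-the-envelope calculation using only the numbers stated above (a Python sketch; it assumes the worst case in which the 20 static variables are also fetched from the datastore rather than cached in memory):

        entities = 10_000
        private_reads = 5          # private entity variables read per update
        static_reads = 20          # static variables, worst case fetched from the datastore
        writes = 5                 # entity variables written per update
        updates_per_day = 60 * 24  # one world update per minute

        reads_per_day = entities * (private_reads + static_reads) * updates_per_day
        writes_per_day = entities * writes * updates_per_day
        print(reads_per_day, writes_per_day)  # 360000000 reads, 72000000 writes per day

    Both architectures pay this datastore cost; the cron-on-GAE option adds the per-minute computation to GAE's CPU/instance quota, while the external authoritative server moves that computation off GAE at the price of the extra bandwidth the question mentions.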

    Read the article

  • Is it a good idea to require committing only working code?

    - by Astronavigator
    Sometimes I hear people saying something like "All committed code must be working". In some articles people even write descriptions of how to create svn or git hooks that compile and test code before a commit. In my company we usually create one branch per feature, and one programmer usually works in this branch. I often (1 in 100, I think, and I think with good reason) make non-compilable commits. It seems to me that the requirement of "always compilable/stable" commits conflicts with the idea of frequent commits. A programmer would rather make one commit a week than test the whole project's stability/compilability ten times a day. For compilable-only code I use tags and some selected branches (trunk, etc.). I see these reasons to commit not-fully-working or non-compilable code: If I develop a new feature, it is hard to make it work by writing only a few lines of code. If I am editing a feature, it is again sometimes hard to keep the code working the whole time. If I am changing some function's prototype or interface, I would also make hundreds of changes - not mechanical changes, but intellectual ones. Sometimes one of them could cause me to carry out hundreds of commits (but if I want all commits to be stable I should commit 1 time instead of 100). In all these cases, making only stable commits means making commits containing many, many changes, and it will be very hard to find out "what happened in this commit?". Another aspect of this problem is that compiling code gives no guarantee of proper working. So is it a good idea to require every commit to be stable/compilable? Does it depend on the branching model or the VCS? In your company, is it forbidden to make non-compilable commits? Is it (and why) a bad idea to use only selected branches (including trunk) and tags for stable versions?
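
    As context for the hook-based enforcement mentioned above: a git pre-commit hook is just an executable placed at .git/hooks/pre-commit whose non-zero exit status aborts the commit. A minimal sketch in Python follows; the build/test command is a placeholder, not any particular project's setup:

        #!/usr/bin/env python3
        """Reject the commit if the project does not build/test cleanly (illustrative sketch)."""
        import subprocess
        import sys

        # Placeholder command -- substitute the project's real build or test invocation.
        result = subprocess.run(["make", "test"], capture_output=True, text=True)
        if result.returncode != 0:
            sys.stderr.write("Build/tests failed, commit rejected:\n")
            sys.stderr.write(result.stdout + result.stderr)
            sys.exit(1)  # non-zero exit makes git abort the commit
        sys.exit(0)

    Whether such a gate is desirable is exactly the trade-off the question raises: it enforces "every commit compiles" at the cost of discouraging small, frequent work-in-progress commits.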

    Read the article

  • Ubuntu 13.04 under Parallels Desktop - Black Desktop after X Windows Update

    - by Bob Reckhow
    I have been running Ubuntu 13.04 successfully on a MacBook Pro in a virtual machine in Parallels Desktop 9. Today (2013-10-17), after applying today's Ubuntu update, which included updates to X Windows, my Ubuntu 13.04 virtual machine launches and the Launcher comes up, but the screen background is solid black rather than the shaded orange colour of the default desktop background (and my desktop icons are "hidden behind this blackness" as well). I can launch applications from the Launcher, and there is a very brief white flash on the screen, and then it returns to black. It's as if there is a "black blanket" covering the entire screen, so there is no way to interact with any application windows using the keyboard or mouse. The icons of the Launcher are responsive to the mouse, so I can right-click and quit any application I have launched. But the rest of the screen is unresponsive to keyboard or mouse. This same behaviour happens with two different versions of Parallels Tools, so I am quite sure this is not a Parallels problem per se, although I could believe that it could be a problem with the interface between Parallels and this newly updated X Windows code. Could anyone tell me what has happened, and how I might be able to fix this problem, so I can continue to use my Ubuntu 13.04 virtual machine? (I do have the option of reverting to a previous version of my virtual machine from before this update, but if possible I would prefer to keep my version of Ubuntu 13.04 up to date with the latest updates.) Thanks, Bob

    Read the article

  • Running a CRON job on an Ubuntu server for SugarCRM

    - by Logik
    I am pretty inexperienced in Linux, so please be descriptive in your answer. My environment: a local Linux (Ubuntu 12.04) server hosting SugarCRM 6.5.2. There is an area in SugarCRM called Scheduler, where I can configure some predefined jobs; in my case I am trying to run email reminders (every min/hour/day/month). For this scheduler to be effective, I read somewhere that I need to set up a CRON job. So I did some research and finally put the following line in the crontab for the root user, as per the instructions given in SugarCRM: cd /var/www/crm; php -f cron.php > /dev/null 2>&1 Well, I am creating contracts in my SugarCRM (AOS module) and I want email reminders to be sent for these contracts to the person concerned. Now, my SugarCRM email is configured correctly and I can send test emails using it. But the CRON + scheduler combination is not giving any result: I can't receive any emails. Then I tried to read /var/log/syslog, and it shows an entry for the following line each minute: Oct 27 15:03:01 unicomm CRON[28182]: (root) CMD (cd /var/www/crm; php -f cron.php > /dev/null 2>&1) I've a few questions: 1) What does the CRON job line I've added in crontab mean? "cd /var/www/crm; php -f cron.php > /dev/null 2>&1" is not making any sense to me. 2) How am I supposed to get this thing to work? I've searched a lot (including the SugarCRM forum), but no luck.

    Read the article

  • Dealing with bad/incomplete/unclear specifications?

    - by eagerMoose
    I'm working on a project where our dev team gets the specifications from the business part of the company. Both the business management and the IT management require estimates and deadline projections, as they should. The good thing is that estimates are mostly made by the actual developers who get to do the required features. The bad thing is that the specifications are usually either too simple (it turns out you're left with a lot of question marks over your head because a lot of information seems to be missing) or too complex (up to the point that you can't even visualize where everything would "fit" in the app). More often than not, the business part of the specs is either incomplete or unaware of what can and can't be done (given the previously implemented business logic). The dev team is given about a day per new spec to give an estimate, and we do try to clear up uncertainties, usually by meeting with whoever wrote the spec. Most of the time it turns out that the spec writers haven't really thought everything through, and it's usually only when we start designing and developing that we end up in trouble, as a lot of the spec seems to have holes. How do you deal with this? Are you generous with estimates in advance?

    Read the article

  • Self-serv advertising service

    - by Mystere Man
    I am seeking a self-serv advertising service for my websites, but I have a few restrictions that seem to make what I'm looking for hard to find. Specifically, I want to place "advertise here" links on my pages and allow end users to purchase advertising on that site, page, and location. These ads will not be part of a national network. Supports multi-tenancy - that is, I have a number of domains using the same "web application" but with customized content per domain. When a customer wants to advertise on a given domain, the ads will only appear on that domain and on that page of the domain (even though the page name may be the same across multiple domains). Supports fixed ad prices, not just CPC; I need monthly and quarterly pricing regardless of performance. Integrates with OpenX and other ad networks, so that if there is no self-serv ad in a given zone, it will use national or direct advertising. Shiny Ads has much of this, but I'm looking for alternatives, as their prices are a bit crazy (20%) and they can only do PayPal.

    Read the article

  • How do references work in R?

    - by djechlin
    I'm finding R confusing because it has such a different notion of reference from what I am used to in languages like C, Java, JavaScript... Ruby, Python, C++ - well, pretty much any language I have ever programmed in. So one thing I've noticed is that variable names are not irrelevant when passing them to something else. The reference can be part of the data. E.g., per this tutorial, a <- factor(c("A","A","B","A","B","B","C","A","C")) results <- table(a) leads to $a showing up as an attribute, as $dimnames$a. We've also witnessed that calling a function like a <- foo(alpha=1, beta=2) can create attributes in a named alpha and beta, or it can assign 1 and 2 to, or otherwise compute on, properties that already exist. (Not that there's a computer-science distinction here - it just doesn't really happen in something like JavaScript, unless you want to emulate it by passing in an object and using key=value there.) Functions like names(...) return lvalues whose assignment affects their input. And the one that most got me is this: x <- c(3, 5, 1, 10, 12, 6) y = x[x <= 5] x[y] <- 0 is different from x <- c(3, 5, 1, 10, 12, 6) x[x <= 5] <- 0 Color me confused. Is there a consistent theory for what's going on here?

    Read the article

  • How should I structure my database to gain maximum efficiency in this scenario?

    - by Bob Jansen
    I'm developing a PHP script that analyzes the web traffic of my clients' websites. By placing a link to a JavaScript file on the client's website (think Google Analytics), my script harvests information like the visitor's IP address, referring link, current page link, user agent, etc. My clients can then view these statistics via a control panel that I have built. These clients can also adjust profile settings, set firewall rules, create support tickets and pay invoices. Currently all the traffic is stored in one table. You can imagine that this table would become very large, as some of my clients receive thousands of pageviews per day. Furthermore, all the traffic data of each client would be stored in the same table, creating a mess. The same goes for the firewall rules currently, and for the invoice and support system. I'm looking for a way to structure my database in a more organized way, so it can hold large amounts of data for multiple users. This is the first project I'm developing that deals with so much data, and I would like to hear suggestions and tips. I was thinking of using multiple databases to structure the data. The main database will store user data (email, pass, id, etc.) and admin/website settings. Then each client would have a unique database labeled prefix_userid, which carries the tables holding their traffic, invoice, and support ticket data. Would this be a solution, and would it slow down or speed up overall performance (that is, spreading the data over multiple databases)? I have a solid VPS, but I would like to be safe and as efficient as possible.

    Read the article

  • Wired connection problem through router

    - by tommypincha
    I'm having trouble connecting my Ubuntu 11.10 machine to the internet through ethernet. I installed a router to get Wi-Fi and now (over a wired connection) I can't get internet with Ubuntu (but I can with Windows 7). I see several attempts per minute by the network manager to get a connection, but after a minute it stops trying. Here are a couple of outputs from key files: cat /etc/network/interfaces auto lo iface lo inet loopback and ifconfig -a eth0 Link encap:Ethernet HWaddr 00:16:76:e4:a6:e8 inet6 addr: fe80::216:76ff:fee4:a6e8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:117 dropped:0 overruns:0 frame:117 TX packets:50 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:12221 (12.2 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:582 errors:0 dropped:0 overruns:0 frame:0 TX packets:582 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:46024 (46.0 KB) TX bytes:46024 (46.0 KB) I tried reconnecting the modem and the router and reconnecting the ethernet cable, but nothing. I tried other solutions from other posts (this one has a similar issue: Wired connection not working), but again nothing. My IP is dynamic. A couple of things I see and did: I see no inet addr, only inet6. I ignored IPv6 in the internet connection settings and restarted the network-manager service - nothing. A difference from the post I mentioned is the RX packets with errors I have; is this a clue to the problem? Any help would be appreciated. Thanks!

    Read the article

  • Turning a problem into a data model

    - by Fogmeister
    OK, I have an app that I'm creating, but I'm just really not sure how to approach the problem. The idea is fairly simple; I'm just not sure how to wrap it in a data model (or even if I should). TBH I feel like I'm making it more complicated than it needs to be. How it works: the app will have circles along the top in a row that need to be connected to circles along the bottom in a row. 10 circles at the top, 10 at the bottom. One connection per pair of dots. Anyway, I can get the dots to connect; I'm just not sure how to wrap it in a data model so that I can analyse what has been connected and see if it is right or not. The circles will be questions and answers. I can make an array of question objects with question and answer properties. I can then display these as the dot pairs. I'm just not sure how to record which questions have been connected to which answers. It is valid for a user to connect a wrong answer, as they all get checked at the end. I was thinking of using SpriteKit, but this isn't a restriction. I could use UIKit or something else. TBH, this question is fairly language-free as I'm just after a way of modelling it.
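
    Since the question is described as language-free, one possible shape for such a model, sketched in Python purely for illustration (all names here are hypothetical, not from the app):

        from dataclasses import dataclass

        @dataclass
        class Question:
            prompt: str   # text shown on a top circle
            answer: str   # correct answer text, shown on one of the bottom circles

        questions = [Question("2 + 2", "4"), Question("3 x 3", "9")]  # ... up to 10 pairs
        bottom_labels = ["9", "4"]          # the (shuffled) answers displayed along the bottom
        connections: dict[int, int] = {}    # top-circle index -> bottom-circle index

        def connect(top: int, bottom: int) -> None:
            connections[top] = bottom       # one connection per top dot; wrong answers are allowed

        def score() -> int:
            """At the end, count connections whose bottom circle carries the right answer."""
            return sum(1 for t, b in connections.items() if bottom_labels[b] == questions[t].answer)

    The drawing layer (SpriteKit, UIKit, or anything else) then only needs to map circle indices to on-screen nodes; the correctness check lives entirely in the model.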

    Read the article

  • Eclipse Java Code Formatter in NetBeans Plugin Manager

    - by Geertjan
    Great news for Eclipse refugees everywhere. Benno Markiewicz forked the Eclipse formatter plugin that I blogged about some time ago (here and here)... and he fixed many bugs, while also adding new features. It's a handy plugin when you're (a) switching from Eclipse to NetBeans and want to continue using your old formatting rules and (b) working in a polyglot IDE team, i.e., now the formatting rules defined in Eclipse can be imported into NetBeans IDE and everyone will happily be able to conform to the same set of formatting standards. And now you can get it directly from Tools | Plugins in NetBeans IDE 7.4: News from Benno on the plugin, received from him today: The plugin is verified by the NetBeans community and available in the Plugin Manager in NetBeans IDE 7.4 (as shown above) and also at the NetBeans Plugin Portal here, where you can also read quite a bit of info about the plugin: http://plugins.netbeans.org/plugin/50877/eclipse-code-formatter-for-java The issue with the empty undo buffer was solved with the help of junichi11: https://github.com/markiewb/eclipsecodeformatter_for_netbeans/issues/18 The issue with the lost breakpoints remains unsolved and there has been no further feedback. That is the main reason why the save action isn't activated by default. See also the open known issues at https://github.com/markiewb/eclipsecodeformatter_for_netbeans/issues?state=open Features are as follows: Global configuration and project-specific configuration. On-save action, which is disabled by default. Show the used formatter as a notification, which is enabled by default. Finally, Benno testifies to the usefulness, stability, and reliability of the plugin: I use the Eclipse formatter provided by this plugin every day at work. Before I commit, I format the sources. It works and that's it. I am pleased with it. Here's where the Eclipse formatter is defined globally in Tools | Options: And here is per-project configuration, i.e., use the Project Properties dialog of any project to override the global settings: Interested to hear from anyone who tries the plugin and has any feedback of any kind!

    Read the article

  • What's the best way to manage error logging for exceptions?

    - by Peter Boughton
    Introduction: If an error occurs on a website or system, it is of course useful to log it, and to show the user a polite message with a reference code for the error. And if you have lots of systems, you don't want this information dotted around - it is good to have a single centralised place for it. At the simplest level, all that's needed is an incrementing id and a serialized dump of the error details. (And possibly the "centralised place" being an email inbox.) At the other end of the spectrum is perhaps a fully normalised database that also allows you to press a button and see a graph of errors per day, or identify what the most common type of error on system X is, whether server A has more database connection errors than server B, and so on. What I'm referring to here is logging code-level errors/exceptions by a remote system - not "human-based" issue tracking, such as is done with Jira, Trac, etc. Questions: I'm looking for thoughts from developers who have used this type of system, specifically with regard to: What are essential features you couldn't do without? What are good-to-have features that really save you time? What features might seem a good idea, but aren't actually that useful? For example, I'd say a "show duplicates" function that identifies multiple occurrences of an error (without worrying about 'unimportant' details that might differ) is pretty essential. A button to "create an issue in [Jira/etc.] for this error" sounds like a good time-saver. Just to reiterate, what I'm after is practical experience from people who have used such systems, preferably backed up with why a feature is awesome/terrible. (If you're going to theorise anyway, at the very least mark your answer as such.)
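
    As a concrete anchor for the "simplest level" described above, such a record might look like the following Python sketch (the field names are purely illustrative):

        import itertools
        import json
        from datetime import datetime, timezone

        _next_id = itertools.count(1)

        def log_error(system: str, exc: Exception, context: dict) -> dict:
            """Build the minimal centralised error record: an incrementing id plus a serialized dump."""
            return {
                "id": next(_next_id),                         # the user-facing reference code
                "logged_at": datetime.now(timezone.utc).isoformat(),
                "system": system,
                "type": type(exc).__name__,
                "message": str(exc),
                "details": json.dumps(context, default=str),  # serialized dump of everything else
            }   # persist to a table, a file, or even an email inbox

    Anything beyond this - duplicate grouping, per-day graphs, "create a Jira issue" buttons - is exactly the feature territory the question asks about.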

    Read the article

  • Confusion with floats converted into ints during collision detection

    - by TheBroodian
    So in designing a 2D platformer, I decided that I should be using a Vector2 to track the world location of my world objects, to retain some sub-pixel precision for slow-moving objects and other such subtle nuances, yet representing their bodies with Rectangles, because as far as collision detection and resolution is concerned, I don't need sub-pixel precision. I thought that the following line of thought would work smoothly...

        Vector2 wrldLocation;
        Point WorldLocation;
        Rectangle collisionRectangle;

        public void Update(GameTime gameTime)
        {
            Vector2 moveAmount = velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
            wrldLocation += moveAmount;
            WorldLocation = new Point((int)wrldLocation.X, (int)wrldLocation.Y);
            collisionRectangle = new Rectangle(WorldLocation.X, WorldLocation.Y, genericWidth, genericHeight);
        }

    and I guess in theory it sort of works, until I try to use it in conjunction with my collision detection, which works by using Rectangle.Offset() to project where collisionRectangle would supposedly end up after applying moveAmount to it, and, if a collision is found, finding the intersection and subtracting the difference between the two intersecting sides from the given moveAmount, which would theoretically give a corrected moveAmount to apply to the object's world location that would prevent it from passing through walls and such. The issue here is that Rectangle.Offset() only accepts ints, and so I'm not really receiving an accurate adjustment to moveAmount for a Vector2. If I leave out wrldLocation from my previous example and just use WorldLocation to keep track of my object's location, everything works smoothly, but then obviously if my object is being given velocities of less than 1 pixel per update, the velocity value may as well be 0, which I feel further down the line I may regret. Does anybody have any suggestions about how I might go about resolving this?

    Read the article

  • Update to SQL Server Configuration Scripting Utility

    - by Bill Graziano
    Last spring I released a utility to script SQL Server configuration information on CodePlex.  I’ve been making small changes in this application as my needs have changed.  The application is a .NET 2.0 console application.  This utility serves two needs for me.  First it helps with disaster recovery.  All server level objects (logins, jobs, linked servers, audits) are scripted to a single file per object type.  This enables the scripts to be easily run against a DR server.  If these are checked into source control you can view the history of the script and find out what changed and when. The second goal is to capture what changed inside a database.  Objects inside a database (tables, stored procedures, views, etc.) are each scripted to their own file.  This makes it easier to track the changes to an object over time.  This does include permissions and role membership so you can capture security changes.  My assumption is that a database backup is the primary method of disaster recovery for databases so this utility is designed to capture changes to objects.  You can find the full list of changes from the original on the Downloads page on CodePlex.

    Read the article

  • WebLogic 12c training in Dutch - May 10th & 11th 2012, Utrecht, Netherlands

    - by JuergenKress
    Axis into ICT offers you the opportunity to increase your skills. We are organizing 'Bring Your Own Laptop Knowledge Sessions'. In a small group of up to 8 people we are going to discuss all the practical aspects of WebLogic Server you ever wanted to know. This is not a standard course, but a training where applying the material in practice is what matters. All participants will receive their own virtual machine, which gives you the ability to continue afterwards with your own practice environment. By keeping the groups small we create an informal atmosphere with plenty of room for all your questions, or even to discuss your specific situation. The approach is highly interactive; after all, you are attending to increase your knowledge. Topics that will be covered: Introduction, JVM Tuning, Deployment, Diagnostic Framework, Class Loading, Security, Configure Resources, Clustering, Scripting. Register for this session: Are you interested in the 'Bring Your Own Laptop Knowledge Sessions: WebLogic 12c'? Register for one of the two dates by using the form below. After registration you will receive a confirmation by e-mail. The training will be in Dutch! Date: May 10 & 11, 2012, from 09:30 - 17:00 hrs. Location: Axis into ICT Headquarters (Utrecht). Expenses: € 700,- per person (VAT excluded). For registration and details please visit our website. Want to promote your event? Let us know on Twitter @wlscommunity! WebLogic Partner Community: For regular information become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Axis, education, WebLogic training, WebLogic, WebLogic Community, OPN, Oracle, Jürgen Kress, WebLogic 12c

    Read the article

  • Displaying a Paged Grid of Data in ASP.NET MVC

    This article demonstrates how to display a paged grid of data in an ASP.NET MVC application and builds upon the work done in two earlier articles: Displaying a Grid of Data in ASP.NET MVC and Sorting a Grid of Data in ASP.NET MVC. Displaying a Grid of Data in ASP.NET MVC started with creating a new ASP.NET MVC application in Visual Studio, then added the Northwind database to the project and showed how to use Microsoft's Linq-to-SQL tool to access data from the database. The article then looked at creating a Controller and View for displaying a list of product information (the Model). Sorting a Grid of Data in ASP.NET MVC enhanced the application by adding a view-specific Model (ProductGridModel) that provided the View with the sorted collection of products to display, along with sort-related information such as the name of the database column the products were sorted by and whether the products were sorted in ascending or descending order. The Sorting a Grid of Data in ASP.NET MVC article also walked through creating a partial view to render the grid's header row, so that each column header was a link that, when clicked, sorted the grid by that column. In this article we enhance the view-specific Model (ProductGridModel) to include paging-related information: the current page being viewed, how many records to show per page, and how many total records are being paged through. Next, we create an action in the Controller that efficiently retrieves the appropriate subset of records, and then we complete the exercise by building a View that displays the subset of records and includes a paging interface that allows the user to step to the next or previous page, or to jump to a particular page number; to render the numeric paging interface we create and use a partial view. Like its predecessors, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more! Read More >
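
    For orientation, the paging-related information the Model carries (current page, records per page, total record count) boils down to a little arithmetic; a short Python sketch with made-up numbers:

        import math

        def page_bounds(page: int, page_size: int, total_records: int):
            """Return (offset of first record, records on this page, total pages) for a 1-based page number."""
            total_pages = math.ceil(total_records / page_size)
            offset = (page - 1) * page_size
            count = max(0, min(page_size, total_records - offset))
            return offset, count, total_pages

        print(page_bounds(page=3, page_size=10, total_records=77))  # -> (20, 10, 8)

    The Controller's job is then just to fetch that many records starting at that offset, and the paging partial view renders links for pages 1 through the total page count.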

    Read the article

  • crippling repeating "pciehp card not present" notifications

    - by Nanne
    When using Ubuntu (12.04, both installed and on a live USB) I get a lot of these messages: pciehp 0000:00:1c.5:pcie04: Card not present on Slot(37) pciehp 0000:00:1c.5:pcie04: Card present on Slot(37) And by "a lot" I mean about 20 per second. This has a crippling effect, and I would like to get rid of it :) The computer is a Packard Bell EasyNote BG48-U-100 DC. A tip I picked up from some Fedora/Red Hat report of this error here was to look at lspci -vnn. I have pasted the part about "00:1c.5" here: http://pastebin.com/0sfsiqW2 For what good it may do, here is the lsmod of my machine: http://pastebin.com/DQZy1kAL From that first pastebin I think I can conclude that it has to do with the module shpchp, which seems to me (aka: Google) to have something to do with ACPI. That's as far as I've come in dissecting this. Can anyone help me along further? What can I do or check, etc.? I did see this topic, but my intention is not to suppress the error message: I know how to do that (from that topic ;) ), but I'm looking for a real solution. Finding the problem on the internet leads me to believe it is neither an Ubuntu-specific nor a Packard Bell-specific problem. If you google the problem it seems that it is present with several other distribution/hardware combos as well, and it looks like the advice is to remove one of the drivers? I have no clue as to which driver I should look at and what the effect of just removing it would be. I have seen this topic, which is old-ish but describes my problem and is about a similar computer. The solution in that topic was to compile a new kernel using a Spanish guide, which seems a bit extreme to me, so I'm kind of hoping for a better solution than that.

    Read the article

  • How to Deal with an out-of-touch "Project manager"

    - by Joe
    This "manager" is 70+ years old and a math genius. We were tasked with creating a web application. He loves SQL and stored procedures. He first created this in MS Access. For the web app I had to take his DB and migrate it to SQL Server. His first thought was to have a master stored procedure with a WAITFOR handling requests from users. I eventually talked him out of that and we use ASP.NET MVC, and eventually the ASP.NET membership as well. Now the web app mostly handles requests from the pages that are passed to stored procedures. It is all stored-procedure driven; the business logic is in there as well. Now we have one open DB connection per logged-in user, plus one. I use LINQ to SQL to check 2 tables and return the values - that's it, period. So 25 users is a load. He complains that my code is bad because his test-driver stored procedure simulates over 100 users with no issue. What are the best arguments for not having all the business logic in stored procedures? How should I deal with this? I am giving an abbreviated story, of course. He is a genius and part owner of the company; all the other owners trust him because he is a genius. And quoting: "He gets things done. Old school."

    Read the article

  • Collision system for big level objects in a 2D game

    - by Aristarhys
    I read many variants today and got some general knowledge, so here are the steps of my thoughts, in pictures (horrible paint.net ones). We need to develop a grid system, so we check only things nearby, perform a simple check to cut out the deep check, and at last do the deep check, like per-pixel collision. Step 1 - Let p1 and p2 be some sprites. Let's first just do a circle collision check - because of the large distance between p1 and p2 this fails, and of course we don't need to test more deeply. But if we have not 2 but 20 objects, why do we even need to circle-test something so far outside of our view? Step 2 - Add a basic column system; now we don't bother with p2 if it's in a column far from p1's column, so we don't even do the circle test. But p3 is in the same column, so we do the circle test, which of course will fail. Step 3 - Let's improve the column system into a grid system with a grid cell size just like the p1, p2, p3 collision boxes, so we cut out things far above or below p1. And this is all great until the BIG OBJECTS come, which are some kind of platforms. They are much bigger than a grid cell. The circle test for them will succeed, but the deep check for the whole big object will fail. And that's the part I can't get. How do I store the grid position of a big object? Like 4 grid coords for the big object's vertexes? And if one of them is close to p1, do a circle check for the centre of the big object, then a deep one if that succeeds? Am I doing it wrong? My possible solution:
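
    For what it's worth, a common way to handle objects bigger than a grid cell is to register them in every cell their bounding box overlaps, so the broad phase stays the same for every size of object. A rough Python sketch of that idea (the cell size and all names are illustrative, not taken from the question):

        from collections import defaultdict

        CELL = 32  # assumed grid cell size in pixels
        grid = defaultdict(list)  # (col, row) -> objects whose bounding box overlaps that cell

        def cells(x, y, w, h):
            """All grid cells touched by an axis-aligned box."""
            for col in range(int(x) // CELL, int(x + w) // CELL + 1):
                for row in range(int(y) // CELL, int(y + h) // CELL + 1):
                    yield (col, row)

        def insert(obj, x, y, w, h):
            for cell in cells(x, y, w, h):
                grid[cell].append(obj)   # a big platform simply lands in many cells

        def candidates(x, y, w, h):
            """Objects worth a circle test / deep check: everything sharing a cell with this box."""
            found = set()
            for cell in cells(x, y, w, h):
                found.update(grid[cell])
            return found

    With that, p1 never needs to know how big the platform is; it only ever deep-checks whatever shares its own cell or cells.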

    Read the article
