Search Results

Search found 23127 results on 926 pages for 'based'.


  • Piping powershell messages to Write-EventLog

    - by Richard
    I have a PowerShell script that runs a custom cmdlet. It is run by Task Scheduler and I want to log what it does. This is my current crude version:

        Add-PsSnapIn PianolaCmdlets
        Write-EventLog -LogName "Windows Powershell" -Source "Powershell" -Message "Starting Update-EbuNumbers" -EventId 0
        Get-ClubMembers -HasTemporaryEbuNumber -show all | Update-EbuNumbers -Verbose
        Write-EventLog -LogName "Windows Powershell" -Source "Powershell" -Message "Finished Update-EbuNumbers" -EventId 0

    What I would like to do is log the output of my custom cmdlet. Ideally I'd like to create different types of event log entries based on whether it was a warning or a verbose message.

    Update: I don't want to log the return value of the cmdlet. The Update-EbuNumbers cmdlet does not return an object. I want to log any verbose messages written by WriteVerbose, and I want to log errors created by ThrowTerminatingError.
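    One possible approach (a sketch, untested against the Pianola snap-in, and requiring PowerShell 3.0+ for stream redirection): merge the verbose stream into the pipeline with 4>&1 and route each VerboseRecord to Write-EventLog, with a try/catch to capture the terminating errors.

        Add-PsSnapIn PianolaCmdlets
        try {
            Get-ClubMembers -HasTemporaryEbuNumber -show all |
                Update-EbuNumbers -Verbose 4>&1 |
                ForEach-Object {
                    # Verbose output arrives as VerboseRecord objects
                    if ($_ -is [System.Management.Automation.VerboseRecord]) {
                        Write-EventLog -LogName "Windows Powershell" -Source "Powershell" `
                            -EntryType Information -EventId 0 -Message $_.Message
                    }
                }
        } catch {
            # ThrowTerminatingError surfaces here as a terminating error
            Write-EventLog -LogName "Windows Powershell" -Source "Powershell" `
                -EntryType Error -EventId 1 -Message $_.Exception.Message
        }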

    Read the article

  • Best filesystem choices for NFS storing VMware disk images

    - by mlambie
    Currently we use an iSCSI SAN as storage for several VMware ESXi servers. I am investigating the use of an NFS target on a Linux server for additional virtual machines. I am also open to the idea of using an alternative operating system (like OpenSolaris) if it will provide significant advantages. What Linux-based filesystem favours very large contiguous files (like VMware's disk images)? Alternatively, how have people found ZFS on OpenSolaris for this kind of workload? (This question was originally asked on SuperUser; feel free to migrate answers here if you know how).

    Read the article

  • Making files generally available on Linux system (when security is relatively unimportant)?

    - by Ole Thomsen Buus
    I am using Ubuntu 9.10 on a stationary PC. I have a secondary 1 TB hard drive with a single big logical partition (currently formatted as ext4). It is mounted as /usr3 with options user, exec in /etc/fstab. I am doing high-speed imaging experiments. Well, only 260 fps, but that still creates many individual files, since each frame is saved as one PNG file. The PC is not used by anyone other than me, which is why the default security model imposed by Ubuntu is not necessary. What is the best way to make the entire contents of /usr3 generally available on all systems, in case I need to move the hard drive to another Ubuntu 9.x or 10.x machine? When grabbing images with the FireWire camera I use a self-made console-based grabbing utility running under sudo. This creates all files with root as owner and group. I am logged in as user otb, and usually I do the following to make files generally available to otb:

        sudo chown otb -R *
        sudo chgrp otb -R *
        sudo chmod a=rwx -R *

    This takes some time, since the disk now contains ~200,000 individual files. After this, how would Linux behave if I moved the hard drive to another system where the user otb is also available? Would the files still be accessible without sudo?
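    A side note on those cleanup commands: chown accepts an owner:group pair, so the first two passes can be combined into a single traversal (a sketch, assuming the same /usr3 mount point):

        sudo chown -R otb:otb /usr3
        sudo chmod -R a=rwx /usr3

    Also relevant to the question: ext4 stores numeric UIDs/GIDs, not user names, so on another machine the files will belong to whichever account happens to have otb's numeric UID there.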

    Read the article

  • How can I use `SetEnvIf` to clear an Apache2 environment variable?

    - by Jamie
    In my apache2 configuration I've got these lines:

        SetEnv log_everything

        # Create the environment variables based on access requests
        SetEnvIf Request_URI "^/orders/.*$" download_access !log_everything
        SetEnvIf Request_URI "^/download/.*$" download_access !log_everything
        SetEnvIf Request_URI "^/wg/.*$" wg_1x1_access !log_everything

        # Log the accesses using the generated environment variables as conditionals.
        CustomLog ${APACHE_LOG_DIR}/download.log combined env=download_access
        CustomLog ${APACHE_LOG_DIR}/wg.log combined env=wg_1x1_access

        RewriteEngine on
        RewriteRule "^/wg/.+$" "/wg/1x1.gif"

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined env=log_everything

    This currently logs all the "download" and "orders" requests to download.log and "wg" requests to wg.log, but everything is also going to access.log. How can I configure this so that "wg" and "download"/"orders" requests won't be duplicated in access.log?
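    One thing worth knowing, and a possible fix (sketched under the assumption that the goal is simply to exclude those URIs from access.log): SetEnv is evaluated by mod_env late in request processing, after mod_setenvif has already run, so the !log_everything clauses cannot actually clear it. A simpler pattern is to set a single exclusion variable and negate it in the CustomLog condition:

        SetEnvIf Request_URI "^/(orders|download|wg)/" dont_log_access
        CustomLog ${APACHE_LOG_DIR}/access.log combined env=!dont_log_access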

    Read the article

  • Configuring USB modem (Huawei EC156) in Ubuntu 13.10

    - by user205427
    I am facing difficulty installing my USB modem in Ubuntu 13.10. Contrary to what many have suggested, it does not get detected automatically, nor does setting up a new connection help. The device is listed by lsusb, but not under Network Manager or Devices; it is detected as a CD-ROM. What I understood from the web is that usb_modeswitch can be used to switch it to a USB modem. Even the 'Enable Mobile Broadband' option is not shown in Network Manager. What is interesting is that when I start the laptop with Windows 7, use the USB modem, and then restart into Ubuntu, both 'Enable Mobile Broadband' and the mobile broadband connection can be seen. Sadly, an internet connection still could not be established. I tried the usb_modeswitch command as suggested elsewhere, but it does not seem to work. This is the output:

        Take all parameters from the command line

        * usb_modeswitch: handle USB devices with multiple modes
        * Version 2.0.1 (C) Josua Dietze 2013
        * Based on libusb1/libusbx

        ! PLEASE REPORT NEW CONFIGURATIONS !

        DefaultVendor=  0x12d1
        DefaultProduct= 0x1505
        HuaweiMode=1
        NeedResponse=0
        InquireDevice enabled (default)

        Look for default devices ...
          found USB ID 8087:0020
          found USB ID 1d6b:0002
          found USB ID 0461:4db6
          found USB ID 12d1:1505
            vendor ID matched
            product ID matched
          found USB ID 138a:0007
          found USB ID 03f0:231d
          found USB ID 8087:0020
          found USB ID 1d6b:0002
        Found devices in default mode (1)
        Access device 005 on bus 001
        Get the current device configuration ...
        OK, got current device configuration (1)
        Use interface number 0
        Use endpoints 0x08 (out) and 0x87 (in)
        Inquire device details; driver will be detached ...
        Looking for active driver ...
         OK, driver detached
        INQUIRY message failed (error -9)

        USB description data (for identification)
        -------------------------
        Manufacturer: HUA?WEI TECHNOLOGIES
        Product: HUAWEI Mobile
        Serial No.: ???????????????????
        -------------------------
        Send old Huawei control message ...
        -> Run lsusb to note any changes. Bye!

    I have been stuck with this problem for 4 days now; any help would be appreciated.
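    For reference, a typical manual invocation for this device ID (a sketch based on the IDs in the output above; in usb_modeswitch 2.x the -J flag sends the standard Huawei mode-switch message):

        # Switch the modem out of CD-ROM (storage) mode into modem mode
        sudo usb_modeswitch -v 12d1 -p 1505 -J

        # Then check whether the mode changed and serial ports appeared
        lsusb
        ls /dev/ttyUSB*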

    Read the article

  • FTP Synchronization software for Mac or PC

    - by evanmcd
    I've been using FTP Synchronizer for a while and generally have had pretty good results with it. But I've just moved to a Mac full-time (at work as well as at home now), so I want to get a native client if I can. I've tried the only one I've found - SuperFlexibleSynchronizer - but it crashed every time I attempted an FTP-to-FTP sync. The most important features to me are: 1) the ability to sync a large number of files (thousands), as I generally work on sites with many files; 2) FTP-to-FTP sync. This would be very helpful, as I work with some CMS-based sites for which users upload files on staging, and I don't want to move files locally before moving them live. Thanks! Evan

    Read the article

  • Profit : August, 2012

    - by user462779
    August 2012 issue of Profit is now available online. Way back in 2003, I wrote my first feature for Profit. It was titled "Everything You Always Wanted to Know About Application Servers (But Were Afraid To Ask)," and it discussed "cutting-edge" technologies like portals and XML and the brand-new Java Platform, Enterprise Edition (Java EE; we're now on Java EE 7). But despite the dated terms I used in my Profit debut, I noticed something in rereading that old story that has stayed constant: mid-tier technology is where innovative enterprise IT projects happen. It may have been XML in 2003, but it's SOA in 2012. While preparing the August issue of Profit was more than just a stroll down memory lane for me, it has provided a nice bit of perspective about what changes and what doesn't in this dynamic IT industry. Technologies continuously evolve - some become standard practice, some are revived or reinvented, and some are left by the wayside. But the drive to innovate and the desire to succeed are business principles that never go out of fashion. Also, be sure to check out the Profit JD Edwards Special Issue 2012 (PDF), featuring partner profiles, customer successes, and Oracle executive interviews. In this issue:

    The Middleware Advantage: Three ways a flexible, integrated software layer can deliver a competitive edge.

    Playing to Win: Electronic Arts' superefficient hub processes millions of online gaming transactions every day.

    Adjustable Loans: With Oracle Exadata, Reliance Commercial Finance keeps pace with India's commercial loan market.

    Future Proof: To keep pace with mobile, social, and location-based services, smart technologists are using middleware to innovate.

    Spring Training: Knowledge and communication help Jackson Hewitt's Tim Bechtold get seasonal workers in top shape.

    Keeping Online Customers Happy: Customers worldwide are comfortable with online service - but are companies meeting customers' needs?

    Read the article

  • Why Does My iMac Keep Setting the Screen Brightness to 'Full'?

    - by TomB
    I have a two-week-old 24" iMac running Mac OS X 10.6. It is the primary monitor, with the menu bar at the top of its display. I have an external monitor as well, a 19" ViewSonic LCD. The LCD sits to the left of the iMac, rotated 90 degrees CCW, and has the Dock along its far left edge. When I restart, the screen brightness on the iMac reverts to full. The ViewSonic LCD retains the setting I have for it. I am using a Mini-DVI to DVI cable for the external display. I even tried setting my Huey Pro to do automatic screen adjustment based on ambient lighting, but the iMac still goes to stun on a reboot. I am sure it is something dumb I have overlooked.

    Read the article

  • Collision with CCSprite

    - by Coder404
    I'm making an iOS app based off the code from here. In the .m file of the tutorial is this:

        -(void)update:(ccTime)dt {
            NSMutableArray *projectilesToDelete = [[NSMutableArray alloc] init];
            for (CCSprite *projectile in _projectiles) {
                CGRect projectileRect = CGRectMake(
                    projectile.position.x - (projectile.contentSize.width/2),
                    projectile.position.y - (projectile.contentSize.height/2),
                    projectile.contentSize.width,
                    projectile.contentSize.height);

                NSMutableArray *targetsToDelete = [[NSMutableArray alloc] init];
                for (CCSprite *target in _targets) {
                    CGRect targetRect = CGRectMake(
                        target.position.x - (target.contentSize.width/2),
                        target.position.y - (target.contentSize.height/2),
                        target.contentSize.width,
                        target.contentSize.height);

                    if (CGRectIntersectsRect(projectileRect, targetRect)) {
                        [targetsToDelete addObject:target];
                    }
                }

                for (CCSprite *target in targetsToDelete) {
                    [_targets removeObject:target];
                    [self removeChild:target cleanup:YES];
                }

                if (targetsToDelete.count > 0) {
                    [projectilesToDelete addObject:projectile];
                }
                [targetsToDelete release];
            }

            for (CCSprite *projectile in projectilesToDelete) {
                [_projectiles removeObject:projectile];
                [self removeChild:projectile cleanup:YES];
            }
            [projectilesToDelete release];
        }

    I am trying to take away the projectiles and have the app know when the CCSprite "Player" and the targets collide. Could someone help me with this? Thanks
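    A minimal sketch of the player/target check (assuming a _player CCSprite ivar; handlePlayerHit: is a hypothetical handler, not part of the tutorial): build the player's bounding box once per update and test it against each target exactly the way the projectile loop does.

        // Inside update:, alongside the projectile loop
        CGRect playerRect = CGRectMake(
            _player.position.x - (_player.contentSize.width/2),
            _player.position.y - (_player.contentSize.height/2),
            _player.contentSize.width,
            _player.contentSize.height);

        for (CCSprite *target in _targets) {
            CGRect targetRect = CGRectMake(
                target.position.x - (target.contentSize.width/2),
                target.position.y - (target.contentSize.height/2),
                target.contentSize.width,
                target.contentSize.height);

            if (CGRectIntersectsRect(playerRect, targetRect)) {
                [self handlePlayerHit:target]; // hypothetical collision handler
                break;
            }
        }

    cocos2d's CCNode also exposes a boundingBox property that computes the same rect, which would shorten both loops.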

    Read the article

  • Hot Off the Press - Oracle Exadata: A Data Management Tipping Point

    - by kimberly.billings
    Advances in data-management architecture - including CPU, memory, storage, I/O, and the database - have been steady but piecemeal. In this report, Merv Adrian describes how Oracle Exadata not only provides the latest technology in each part of the data-management architecture, but also integrates them under the full control of one vendor with a unified approach to leveraging the full stack. He writes, "the real 'secret sauce' of Oracle Exadata V2 is the way in which these technologies complement each other to deliver additional performance and scalability." Merv interviews two Exadata customers, Banco Transylvania and TUI Netherlands, and concludes that early indications are that Oracle Exadata is delivering on its promise of extreme performance and scalability. His recommendation to IT is to target corporate applications with the biggest potential for speed-based enhancement, and to consider whether Oracle Exadata V2 can cost-effectively enable new ways to use these for competitive advantage. Read the full report.

    Read the article

  • Excel: making line charts so the line goes through all data points

    - by Mike
    I've got data spanning 50+ years for various products. Unfortunately not all products have data for each year. I've created a line chart to show the movement (quantity sold) of these products over the years. It works well, except where the data points are too far apart, e.g. 1965 and then 1975: for some reason there is no line. It's not perfect data because of the missing years, and I can live with that; I just want to see the trend, not just sporadic dots, squares or crosses. Any help or links greatly appreciated. Mike
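    Two standard Excel remedies may help (both assume the missing years are blank cells rather than zeros): either tell the chart to bridge gaps via Select Data > Hidden and Empty Cells > "Connect data points with line", or make the blanks evaluate to #N/A, which line charts interpolate across. A helper-column formula for the latter:

        =IF(B2="", NA(), B2)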

    Read the article

  • Is micro-optimisation important when coding?

    - by BozKay
    I recently asked a question on stackoverflow.com to find out why isset() was faster than strlen() in PHP. This raised questions about the importance of readable code, and whether performance improvements of microseconds were worth even considering. My father is a retired programmer, and I showed him the responses. He was absolutely certain that if a coder does not consider performance in their code, even at the micro level, they are not a good programmer. I'm not so sure - perhaps the increase in computing power means we no longer have to consider these kinds of micro-performance improvements? Perhaps this kind of consideration is up to the people who write the actual language implementation (of PHP, in the above case)? The environmental factors could be important too - the internet consumes 10% of the world's energy; I wonder how wasteful a few microseconds of code is when replicated trillions of times on millions of websites? I'd like to know answers preferably based on facts about programming. Is micro-optimisation important when coding?

    EDIT: My personal summary of 25 answers, thanks to all. Sometimes we need to really worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. However, considering micro-optimisation from time to time doesn't hurt. A basic understanding can help us avoid obvious bad choices when coding, such as:

        if (expensiveFunction() && counter < X)

    which should be:

        if (counter < X && expensiveFunction())

    (Example from @zidarsk8.) This could be an inexpensive function, in which case changing the code would be micro-optimisation. But with a basic understanding, you would not have to change it, because you would write it correctly in the first place.

    Read the article

  • Procedural world generation oriented on gameplay features

    - by Richard Fabian
    In large procedural landscape games, the land seems dull, but that's probably because the real world is largely dull, with only limited places where the scenery is dramatic or tactical. Looking at world generation from this point of view, a landscape generator for a game (that is, not for the sake of scenery, but for the sake of gameplay) needs to not follow the rules of landscaping, but instead some rules married to the expectations of the gamer. For example, there could be a choke-point/route generator that creates hills, ravines, rivers and mountains between cities, rather than the natural way cities arise, scattered on the land based on resources or conditions generated by the mountains and rainfall patterns. Is there any existing work being done like this? Starting with cities or population centres and then adding in terrain afterwards? The reason I'm asking is that I'd previously pondered taking existing maps from fantasy fiction (my own and others'), putting the information into the system as a base point, and then generating a good world to play in from it. This seems covered by existing technology, that is, where the designer puts in all the necessary information such as the city populations, resources, biomes, road networks and rivers, then allows the PCG to fill in the gaps. But now I'm wondering if it may be possible to have a content generator generate the overall design as well: generate the cities and population centres, balancing them so that there is a natural-seeming need for commerce; then generate the positions and connectivity; then from the type of city produce the list of necessary resources that must be nearby; and only then, maybe given some rules on how to make the journey between cities both believable and interesting, generate the final content, including the roads, the choke points, the bridges and tunnels, ferries, and the terrain, including the biomes and coastline necessary. If this has been done before, I'd like to know, and would like to know what went wrong and what went right.

    Read the article

  • What are the advantages of programming to under an OS as opposed to bare metal executive?

    - by gby
    Assume you are presented with an embedded system application to program, in C, on a multi-core environment (think a Cavium or Tilera) and need to choose between two environments: code the application under Linux in SMP mode, or code the application under a thin bare-metal executive (something like a very minimal RTOS), perhaps with a single core running UP Linux to serve control tasks. For the purpose of this question, assume that both environments provide the same level of performance guarantees in any measurable aspect of run-time performance, including number of meaningful actions per second, jitter, latency, and real-time considerations - the works. (And yes, I realize this is by far not a trivial assumption at all; bear with me.) How would you justify going with a Linux SMP-based solution rather than a bare-metal thin executive solution? The question may seem silly. It certainly seems obvious to me - but I have to convince someone that does not think the same. Could you help make a list of arguments in favor of choosing a real SMP-aware OS (Linux) vs. a bare-metal executive, assuming performance guarantees are NOT an issue? Many thanks

    Read the article

  • Excel Matching problem with logic expression

    - by abelenky
    I have a block of data that represents the steps in a process and the possible errors:

        ProcessStep    Status
        FeesPaid       OK
        FormRecvd      OK
        RoleAssigned   OK
        CheckedIn      Not Checked In.
        ReadyToStart   Not Ready for Start

    I want to find the first Status that is not "OK". I have attempted this:

        =MATCH("<>""OK""", StatusRange, 0)

    which is supposed to return the index of the first element in the range that is NOT EQUAL (<>) to "OK". But this doesn't work, instead returning #N/A. I expect it to return 4 (index 4, in a 1-based index, representing that CheckedIn is the first non-OK element). Any ideas how to do this?
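    For reference, a worked alternative (assuming StatusRange is a single-column range): MATCH does not accept criteria strings like "<>OK" - those belong to the COUNTIF/SUMIF family - but an array formula comparing the whole range does the job when entered with Ctrl+Shift+Enter:

        =MATCH(TRUE, StatusRange<>"OK", 0)

    With the sample data above this returns 4, the position of CheckedIn.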

    Read the article

  • Concurrency with listeners and upload: is this solution sound?

    - by cbrulak
    I'm working on an Android app. We have listeners for position data, and camera capture is timed on a background thread (not a service). The timer thread dictates when the image is captured and also when the data is cached in a local SQLite db. My question is how to properly store the location data as it comes in from the listeners, and pull that data so that the database can be updated when the camera capture is executed. I can't put the location data into the database as it arrives, because it is produced more frequently than the camera images, and the camera images are what dominate the architecture at the moment (location data is supplementary). However, I need the location data to get a rough location of where the image was captured. My first thought was to have a singleton store the location data, with a semaphore on the getInstance method so that if the database update is happening (after an image is captured) we don't get an error. The location data can wait for the database update, or it can be lost for that particular event; it doesn't really matter. What are your thoughts? Am I on the right track? (And is this the right sub-site, or would this be better on Stack Overflow?)
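    One way to sketch the holder (Java; the class name is illustrative): since only the most recent fix matters and losing one is acceptable, an AtomicReference gives the same safety as a semaphore-guarded singleton without any blocking - the listener publishes, the capture thread reads.

        import android.location.Location;
        import java.util.concurrent.atomic.AtomicReference;

        /** Holds the most recent location fix; safe to write from the
         *  listener thread and read from the capture/database thread. */
        public final class LastLocationHolder {
            private static final LastLocationHolder INSTANCE = new LastLocationHolder();
            private final AtomicReference<Location> last = new AtomicReference<>();

            private LastLocationHolder() {}

            public static LastLocationHolder getInstance() { return INSTANCE; }

            // Called from the LocationListener callback
            public void update(Location loc) { last.set(loc); }

            // Called from the timer thread when an image is captured;
            // may return null if no fix has arrived yet
            public Location snapshot() { return last.get(); }
        }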

    Read the article

  • Database Vault integration available

    - by Anthony Shorten
    One of the major features of Oracle Utilities Application Framework V4.1 is the provision of a base solution for integration with the Database Vault product. Database Vault is part of Oracle's security portfolio of products and allows database user permissions to be locked down so that only appropriate users have appropriate access to the product data. By default, when you install the product database, administrators and SYSDBA users have full DML access (SELECT, INSERT, UPDATE and DELETE) to the schemas they own and, in the case of the SYSDBA users, all schemas on the database. This can be perceived as an issue. Database Vault adds an additional layer of security to disable inappropriate access. In Oracle Utilities Application Framework, a prebuilt Database Vault solution has been provided to give base DML access to product data for product users only. The solution is shipped with the database installation files and includes a set of SQL files to create, disable, enable and delete the Database Vault objects. The solution contains a Database Vault Realm, RuleSets, Rules and Command Rules that can be used as-is or extended to meet site-specific needs. The solution is consistent with the Database Vault solutions provided for other Oracle applications such as PeopleSoft, E-Business Suite, JD Edwards and Siebel; customers familiar with those solutions will recognize the similarities. For more details, refer to Database Vault Integration for Oracle Utilities Application Framework Based Products on My Oracle Support (KB Id: 1290700.1).

    Read the article

  • HPC Cluster planning workflow?

    - by Veronica
    After three days of intensive Google searching, I have not found any high-level workflow for building a low-profile, cheap computing cluster (we are not interested in HA yet). This is just a front-end plus one node for now. We want to start small with Rocks Cluster, provide a web-based server for offering services, and then add nodes as our budget increases. We're a small company, so we don't have enough human resources to implement it smoothly. Here are some facts about our environment: our hardware is not constant (we will add nodes); our workload will vary (on the order of 200 MB - 1 TB); our software will change (scientific applications for data mining). Do you know any visual workflow, worksheet, or chart describing the general steps necessary to plan our cluster?

    Read the article

  • Trouble accessing network drives in Windows 7

    - by Warlax
    Recently purchased a LaCie network drive. Connected it to my router, configured it, added a user for myself. Installed the "Network Assistant" that came with it - no problem. Went to My Computer > Network, found the network drive, was prompted for a login/password to see the contents, typed \user and password (leading slash to get rid of DOMAIN) and accessed the contents. There is one open share that I can access/mount without problem, and there's another share with my user name as set up via the network drive's web-based config. Double-clicked on that, was prompted again - but the user/password doesn't work! It says I have no permissions, but the user/pass combination is correct. Some Windows Vista posts talk about a Local Security Policy menu, but that's nowhere to be found. Any ideas? (Windows 7 Professional, BTW.)

    Read the article

  • What the Hekaton?

    - by Tony Davis
    Hekaton, the power behind SQL Server 2014's In-Memory OLTP technology, is intended to make data operations run orders of magnitude faster on SQL Server. This works its magic partly by serving database workloads entirely from main memory, using memory-optimized table structures. It replaces the relational engine's standard locking model with an optimistic concurrency model based on time-stamped row versions. Deeper down, the Hekaton engine uses new, 'latch free' data structures. So far, so good, but performance improvements on this scale require a compromise, and the compromise is that these aren't tables as we understand them. For the database developer, these differences are painful because they involve sacrificing some very important bits of the relational model. Most importantly, Hekaton tables don't currently support FOREIGN KEY constraints or CHECK constraints, and you can't put the checks in triggers because there aren't any DML triggers either. Constraints allow a relational designer to enforce relational integrity and data integrity. Without them, of course, 'bad data' can get into our Hekaton tables. There is no easy way of preventing it. For several classes of database and data, this is a show-stopper. One may regard all these restrictions regretfully, seeing limited opportunity to try out Hekaton with current databases, but perhaps there is also a sudden glow of recognition. Isn't this how we all originally imagined table variables were going to be, back in SQL 2005? And they have much the same restrictions. Maybe, instead of pretending that a currently-designed database can be 'Hekatonized' with a few mouse clicks, we should redesign databases for SQL 2014 to replace table variables with Hekaton tables, exploiting this technology for fast intermediate processing, and for the most part forget, for now, the idea of trying to convert our base relational tables into Hekaton tables. Few database developers would be averse to having their working tables running an order of magnitude faster, as long as it didn't compromise the integrity of the data in the base tables.

    Read the article

  • Why does DataContractJsonSerializer not include generic like JavaScriptSerializer?

    - by Patrick Magee
    So the JavaScriptSerializer was deprecated in favor of the DataContractJsonSerializer.

        var client = new WebClient();
        var json = await client.DownloadStringTaskAsync(url); // http://example.com/api/people/1

        // Deprecated, but clean looking and generally fits in nicely with
        // other code in my app domain that makes use of generics
        var serializer = new JavaScriptSerializer();
        Person p = serializer.Deserialize<Person>(json);

        // Now have to make use of ugly typeof to get the Type when I
        // already know the type at compile time. Why no generic type T?
        var serializer = new DataContractJsonSerializer(typeof(Person));
        Person p = serializer.ReadObject(json) as Person;

    The JavaScriptSerializer is nice and allows you to deserialize using a generic type T in the method name. Understandably, it's been deprecated for good reason: with the DataContractJsonSerializer you can decorate your type with various attributes so it isn't as brittle as the JavaScriptSerializer, for example:

        [DataMember(Name = "personName")]
        public string Name { get; set; }

    Is there a particular reason why they decided to only allow users to pass in the Type?

        Type type = typeof(Person);
        var serializer = new DataContractJsonSerializer(type);
        Person p = serializer.ReadObject(json) as Person;

    Why not this?

        var serializer = new DataContractJsonSerializer();
        Person p = serializer.ReadObject<Person>(json);

    They could still use reflection with the DataContract-decorated attributes based on the T specified in ReadObject<T>(json).
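    For what it's worth, a small generic wrapper is easy to add yourself (a sketch; note that DataContractJsonSerializer.ReadObject takes a Stream rather than a string, so the JSON text has to be wrapped first):

        using System.IO;
        using System.Runtime.Serialization.Json;
        using System.Text;

        public static class JsonHelper
        {
            // Generic convenience wrapper around DataContractJsonSerializer
            public static T ReadObject<T>(string json)
            {
                var serializer = new DataContractJsonSerializer(typeof(T));
                using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
                {
                    return (T)serializer.ReadObject(stream);
                }
            }
        }

        // Usage: Person p = JsonHelper.ReadObject<Person>(json);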

    Read the article

  • Representing heightmaps, on disk and when drawing

    - by gardian06
    This is a conglomeration question; when answering, please specify which part you are addressing. I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure as to some of the implementation. I have done file-I/O maze generation (using a key to read the file, and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think heightmaps might be a good approach, but I don't know how to represent them effectively. For a heightmap, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item 1 is the key reference, item 2 is the elevation), or another approach that I have not thought of yet? When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make slopes noticeable at about a 60-degree viewing angle while not really affecting gravity-driven physics (assuming some effect while moving up/downhill)? I am thinking of maybe going to procedural generation at some point, but am wondering if it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or if designing to thin walls and open spaces is better? This decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls. EDIT: The game will be an elevation maze shooter. Levels/maps will be mazes with elevation the player has to negotiate. Elevation will have effects on "combat" vision and movement.
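    To make the CSV option concrete, one common layout is simply a grid of elevations whose row/column position supplies the (u,v) coordinate (an illustrative sample, not from the question):

        0,0,1,2,2
        0,1,2,3,2
        1,2,3,3,2

    A separate key file (or the XML variant) would then map tile types onto cells; the choice mostly comes down to whether the key and the heights need to live in one file.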

    Read the article

  • How to Enable User-Specific Wireless Networks in Windows 7

    - by The Geek
    Wireless network settings in Windows 7 are global across all users, but there's a little-known option that lets you switch them to per-user, so each user has access to only the networks they are allowed to connect to. Here's how it all works. How is this useful? Maybe you want to prevent a particular user from accessing the internet - if you don't give them the wireless password, they won't be able to get online. This could be very useful if you've got mini-people playing games on the family PC, but you don't want them getting online.

    Read the article

  • Microsoft Outlook tips and tricks for improving user experience?

    - by Roee Adler
    I'm one of those heavy Microsoft Outlook users, currently working on the 2007 version. God knows this tool is heavy and may impose problems. I wondered what the Super User crowd has to suggest in order to improve the usage experience. Several suggestions of my own:

    1. Always work in cached mode (Tools > Account Settings > Change > Use Cached Exchange Mode).
    2. Use Outlook's local archiving capabilities.
    3. Use Outlook's RSS reader - it's simple and allows offline access to your feeds.
    4. If you have e-mail subscriptions to magazines, blogs, etc., create a subdirectory to keep them and a rule to automatically move them there when they arrive (one rule per subscription, based on the sender e-mail).

    You can also share suggestions that require configuration of Exchange Server, for those of us who can bring them to their IT managers. What are your suggestions? PS: "Use Gmail" is not an accepted answer; some of us don't control what email system we use...

    Read the article

  • Learn Advanced ADF online – for free by Grant Ronald

    - by JuergenKress
    ADF knowledge is key for any BPM implementation! The second part of the advanced ADF online eCourse is live now! This covers the advanced topics of regions and region interaction, as well as getting down and dirty with some of the layout features of ADF Faces, skinning, and DVT components. The aim of this course is to give you a self-paced learning aid which covers the more advanced topics of ADF development. The content is developed by Product Management and our curriculum development teams, and is based on advanced training material we have been running internally for about 18 months. We will get started on the next chapter, but in the meantime, please have a look at chapters one and two. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article
