Search Results

Search found 23568 results on 943 pages for 'select'.

Page 564 of 943

  • MySQL 5.5.9 Query Cache not working.

    - by thepearson
    I am running MySQL 5.5.9 x86_64 RPM as downloaded from mysql.com. Running on CentOS 5.5 Xen DomU. I have enabled the query cache; however, MySQL NEVER uses it. All of my tables are InnoDB. Why is the Qcache never hit? Here are my settings. mysql> SELECT VERSION(); +-----------+ | VERSION() | +-----------+ | 5.5.9 | +-----------+ 1 row in set (0.00 sec) mysql> SHOW VARIABLES LIKE '%query_cache%'; +------------------------------+-----------+ | Variable_name | Value | +------------------------------+-----------+ | have_query_cache | YES | | query_cache_limit | 2097152 | | query_cache_min_res_unit | 4096 | | query_cache_size | 536870912 | | query_cache_type | ON | | query_cache_wlock_invalidate | OFF | +------------------------------+-----------+ 6 rows in set (0.00 sec) mysql> show status like 'Qcache%'; +-------------------------+-----------+ | Variable_name | Value | +-------------------------+-----------+ | Qcache_free_blocks | 1 | | Qcache_free_memory | 536852824 | | Qcache_hits | 0 | | Qcache_inserts | 0 | | Qcache_lowmem_prunes | 0 | | Qcache_not_cached | 7665775 | | Qcache_queries_in_cache | 0 | | Qcache_total_blocks | 1 | +-------------------------+-----------+ 8 rows in set (0.00 sec)
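
    One way to narrow this down — a minimal sketch, assuming a test table exists and the server is otherwise idle — is to run the same cacheable SELECT twice and watch Qcache_hits move. If the hit counter stays at zero even for a hand-typed query, the cache itself is not being consulted (for example because queries arrive in forms the cache will not store). The table name below is a placeholder, not taken from the post.

    ```sql
    -- Minimal test sketch: 'mytable' is a placeholder table name.
    SHOW GLOBAL STATUS LIKE 'Qcache_hits';
    SELECT SQL_CACHE COUNT(*) FROM mytable;  -- first run should populate the cache
    SELECT SQL_CACHE COUNT(*) FROM mytable;  -- identical second run should be a cache hit
    SHOW GLOBAL STATUS LIKE 'Qcache_hits';   -- hits should have increased by one
    ```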

    Read the article

  • Search Work Items for TFS 2010 - New Extension

    - by MikeParks
    A few months ago I was constantly using Visual Studio 2008 with Team Foundation Server 2008. Searching for work items with queries inside Visual Studio became a pain until I found an add-in that simplified it into one little search box in the IDE. It allowed me to enter some text, hit the enter key, and it would bring back a list (aka open a .wiq file) of work items that matched the text entered. I became a huge fan of Noah Coad's Search Work Item Add In. He wrote a pretty good blog on how to use it as well. Of course when we upgraded to Visual Studio 2010 and Team Foundation Server 2010, the 2008 add-in no longer worked. I didn't see any updates on CodePlex to make it 2010-compatible. Cory Cissell and I have published a few Visual Studio extensions already, so I figured I'd take a shot at making this tool 2010-compatible by turning it into an extension. Sure enough, it worked. We used it locally for a while and recently decided to publish it to the Visual Studio Gallery. If you are currently looking for an easy way to search work items in Visual Studio 2010, this is worth checking out. Big thanks go out to Noah for originally creating this on CodePlex. The extension we created can be downloaded here: http://visualstudiogallery.msdn.microsoft.com/en-us/3f31bfff-5ecb-4e05-8356-04815851b8e7 * Additional note: The default search fields are Title, History, and Description. If you want to modify which work item fields are searchable, type "--template" (no quotes) into the search box and hit enter. This will open the search template. Just add another "Or" statement, pick the field name, select an operator, type "[search]" (no quotes) in the value field, and hit Ctrl + S to save. The next time you run a search it will use the modified search template. That's all for now. Thanks! - Mike

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 28 (sys.dm_db_stats_properties)

    - by Tamarick Hill
    The sys.dm_db_stats_properties Dynamic Management Function returns information about the statistics that are currently on your database objects. This function takes two parameters, an object_id and a stats_id. Let’s have a look at the result set from this function against the AdventureWorks2012.Sales.SalesOrderHeader table. To obtain the object_id and stats_id I will use a CROSS APPLY with the sys.stats system table. SELECT sp.* FROM sys.stats s CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.Stats_id) sp WHERE sp.object_id = object_id('Sales.SalesOrderHeader') The first two columns returned by this function are the object_id and the stats_id columns. The next column, ‘last_updated’, gives you the date and the time that a particular statistic was last updated. The next column, ‘rows’, gives you the total number of rows in the table as of the last statistic update date. The ‘rows_sampled’ column gives you the number of rows that were sampled to create the statistic. The ‘steps’ column represents the number of specific value ranges from the statistic histogram. The ‘unfiltered_rows’ column represents the number of rows before any filters are applied. If a particular statistic is not filtered, the ‘unfiltered_rows’ column will always equal the ‘rows’ column. Lastly we have the ‘modification_counter’ column, which represents the number of modifications to the leading column in a given statistic since the last time the statistic was updated. Probably the most important column from this Dynamic Management Function is the ‘last_updated’ column. You want to always ensure that you have accurate and updated statistics on your database objects. Accurate statistics are vital for the query optimizer to generate efficient and reliable query execution plans. Without accurate and updated statistics, the performance of your SQL Server would likely suffer. For more information about this Dynamic Management Function, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/jj553546.aspx Follow me on Twitter @PrimeTimeDBA
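
    As a follow-up, here is a hedged sketch (not from the original post) that combines the ‘last_updated’ and ‘modification_counter’ columns described above to surface statistics that may be stale; the modification threshold is an arbitrary assumption and should be tuned per system.

    ```sql
    -- Hypothetical helper query built on sys.dm_db_stats_properties:
    -- list statistics ordered by age, flagging heavy churn on the leading column.
    SELECT OBJECT_NAME(s.[object_id]) AS table_name,
           s.name                     AS statistic_name,
           sp.last_updated,
           sp.[rows],
           sp.rows_sampled,
           sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) AS sp
    WHERE sp.modification_counter > 1000      -- arbitrary threshold, adjust as needed
    ORDER BY sp.last_updated ASC;
    ```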

    Read the article

  • Save Upgrade downtime: Upgrade APEX upfront

    - by Mike Dietrich
    With almost every patch or release upgrade of the Oracle Database a new version of Oracle Application Express (APEX) will be installed. And as APEX is part of the database installation it will be upgraded as part of the component upgrades after the ORACLE SERVER component has been successfully upgraded to the new release. But the APEX upgrade can take a while (several minutes or even more in some cases). Therefore it is common advice to upgrade APEX upfront before upgrading the database, as this can be done online while the database is in production (unless your database serves just as an APEX application backend - in this case upgrading APEX upfront won't save you anything). To upgrade Oracle APEX upfront you'll have to follow MOS Note 1088970.1. It explains that you'll have to: Determine the installation type by running this query: select count(*) from <SCHEMA>.WWV_FLOWS where id = 4000; where <SCHEMA> can be one of the following: FLOWS_010500 1.5.X, FLOWS_010600 1.6.X, FLOWS_020000 2.0.X, FLOWS_020100 2.1.X, FLOWS_020200 2.2.X, FLOWS_030000 3.0.X, FLOWS_030100 3.1.X, APEX_030200 3.2.X, APEX_040000 4.0.X, APEX_040100 4.1.X, APEX_040200 4.2.X. If the query returns 0 then you'll need to run apxrtins.sql. If the query returns 1 then you'll need to execute apexins.sql. Download the newest APEX package and install it. -Mike
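
    Before plugging a schema name into the query above, a quick way (my own sketch, not part of the MOS note) to confirm which APEX version is registered in the database is to query DBA_REGISTRY; the owning schema then follows from the version-to-schema table above.

    ```sql
    -- Shows the registered APEX component and its version.
    SELECT comp_id, version, status
      FROM dba_registry
     WHERE comp_id = 'APEX';
    ```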

    Read the article

  • Multiple LiveCD iso's on a single USB drive

    - by Keck
    I am looking to create a USB flash drive that I can put multiple LiveCD ISOs on and select which one boots at startup. The ideal candidate supports Linux- and Windows-based ISOs, and is relatively simple. It also must have some reasonable process for adding and removing ISOs from the drive/list. Things that I'm not looking for in this specific question: UBCD or other swiss-army-knife LiveCDs. The point is to boot any one of multiple CDs, not to boot a (certainly useful) utility CD. Installing a single LiveCD to a USB drive. I'd like to have multiple ISO images, selectable at startup. I don't have a specific purpose in mind; possibilities include a single drive with a Knoppix variant, Ubuntu desktop, UBCD for DOS, UBCD4Win, the Offline NT Password Cracker, etc. Flexible and easy to use are the name of the game!

    Read the article

  • SQL SERVER – Get Directory Structure using Extended Stored Procedure xp_dirtree

    - by pinaldave
    Many years ago I wrote the article SQL SERVER – Get a List of Fixed Hard Drive and Free Space on Server where I demonstrated using an undocumented stored procedure to find the drive letters on the local system and the available free space. I received a question in email from a reader asking if there is any way he can list the directory structure within T-SQL. When I inquired further, he explained that he needs this because he wanted to set up a backup of the data in a certain structure. Well, there is one undocumented stored procedure which can do exactly that. However, please be wary of using any undocumented procedures. xp_dirtree 'C:\Windows' Execution of the above stored procedure will give the following result. If you prefer, you can insert the data into a temp table and use it for further processing. Here is a quick script which will insert the data into the temp table and retrieve it from the same. CREATE TABLE #TempTable (Subdirectory VARCHAR(512), Depth INT); INSERT INTO #TempTable (Subdirectory, Depth) EXEC xp_dirtree 'C:\Windows'; SELECT Subdirectory, Depth FROM #TempTable; DROP TABLE #TempTable; Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, SQLServer, T SQL, Technology
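
    A related sketch, with the same undocumented-procedure caveat: xp_dirtree also accepts a depth and an "include files" argument, and when files are included a third column is returned indicating whether each row is a file. This variation is my own illustration and is not part of the original post.

    ```sql
    -- Hedged variation: list files (not just directories) one level deep.
    CREATE TABLE #DirTree (Subdirectory VARCHAR(512), Depth INT, IsFile BIT);
    INSERT INTO #DirTree (Subdirectory, Depth, IsFile)
    EXEC xp_dirtree 'C:\Windows', 1, 1;   -- depth = 1, include files = 1
    SELECT Subdirectory, Depth, IsFile
    FROM #DirTree
    WHERE IsFile = 1;
    DROP TABLE #DirTree;
    ```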

    Read the article

  • Atheros wireless not working

    - by Chandru1
    I have been struggling hard since I installed Ubuntu 10.10, as it has been difficult for me to get my wifi working. So here is what I tried. First I checked whether I have the driver using the ifconfig command, and it shows the wireless LAN interface as wlan0. Next, I tried the command iwlist wlan0 scanning as root, which gave me the output "no scan results". Next, I visited this link https://help.ubuntu.com/community/WifiDocs/Driver/Atheros to see what problem my laptop may have. I do have an ath5k chipset. As I followed the instructions in the above link, one of the files, blacklist-ath_pci.conf, had this written in it: For some Atheros 5K RF MACs, the madwifi driver loads but fails to correctly initialize the hardware, leaving it in a state from which ath5k cannot recover. To prevent this condition, stop madwifi from loading by default. Use Jockey to select one driver or the other. (Ubuntu: #315056, #323830) I am not that good at Linux but I have given it a try. I am desperate to have my wifi working and I would be glad if this community could help. ADDED: If anyone would like to know what drivers I am using, this is the output: network description: Wireless interface product: AR2413 802.11bg NIC vendor: Atheros Communications Inc. physical id: 3 bus info: pci@0000:0a:03.0 logical name: wlan0 version: 01 serial: 00:19:7d:d3:0c:fd width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath5k driverversion=2.6.35-24-generic firmware=N/A latency=168 link=no maxlatency=28 mingnt=10 multicast=yes wireless=IEEE 802.11bg resources: irq:18 memory:d0000000-d000ffff Some more information and output as to what I have done: lsmod | grep ath ath5k 130083 0 mac80211 231541 1 ath5k ath 8153 1 ath5k cfg80211 144470 3 ath5k,mac80211,ath led_class 2633 1 ath5k

    Read the article

  • Oracle Congratulates Winners of the 2012 Oracle Excellence Award: Eco-Enterprise Innovation

    - by Evelyn Neumayr
    Oracle recently held its fifth annual Eco-Enterprise Innovation awards ceremony during Oracle OpenWorld in San Francisco. Oracle Chairman of the Board, Jeff Henley, awarded select customers for their use of Oracle products to help with their sustainability initiatives. During this session, several award recipients discussed how they embedded various sustainability strategies throughout their organizations to help reduce their costs as well as their environmental footprint. It was an interesting session based around green best business practices and how Oracle products enabled many of these customers’ sustainability efforts. The winning customers for 2012 are: Dena Bank, Earth Rangers Centre, Grupo Pão de Açúcar, Health Authority – Abu Dhabi, Korean Air, North County Transit District, Orlando Utilities Commission, Ricoh – Europe, Schneider Electric, Severn Trent Water, and Terracap. Several of these winning customers also selected a partner to co-accept the award with them. These winning partners played a major role in helping these customers achieve their sustainability-related efforts. Oracle also awarded Ian Winham, Executive Vice President and Chief Financial Officer from Ricoh Europe, with Oracle's Chief Sustainability Officer of the Year award. Ricoh Europe is a multinational imaging and electronics company with a strong commitment to sustainability. Ian was honored for his leadership in reducing Ricoh's environmental impacts by leveraging Oracle's applications and underlying technology. See here for more details.

    Read the article

  • Excel 2010 filter arrow not showing text values

    - by DVP
    I have an odd problem on a tracker spreadsheet I use. All the columns have a filter, but when you click on the filter arrow it doesn't show you a breakdown of all the text values for that column. All it shows is the usual 'sort A to Z/Z to A', but the bottom half of the pop-up screen is blank, where normally you have a list of text values that you can further filter by putting a tick next to each. It only displays (Select All), which you can tick, but it's pointless as the column has selected all text values and hasn't been further filtered, which is what I need to do.

    Read the article

  • How does RAM fail?

    - by ethanlee16
    I have an issue with a Dell Inspiron 15 (1545) laptop that refuses to open any applications (save select Microsoft programs, e.g. Security Essentials, Ctrl Panel, Windows Explorer (not Internet), regedit, Event Viewer, etc.). I've run the Microsoft Memory Diagnostics Tool and it found that a 'hardware problem was detected.' Does this indicate that the RAM has failed? I notice when I open programs like Word, Excel, Internet Explorer, etc., it always gives me an error from WerFault.exe saying The instruction at xxxxxxx referenced memory at xxxxxxxxx. The memory could not be written. and sometimes something about illegal instructions. If it is a hardware problem, does this mean that replacing the RAM is my only option? Again, I would also like to know if RAM can fail (like hard drives) and if malware can cause RAM to fail also.

    Read the article

  • Can't connect to STunnel when it's running as a service

    - by John Francis
    I've got STunnel configured to proxy non SSL POP3 requests to GMail on port 111. This is working fine when STunnel is running as a desktop app, but when I run the STunnel service, I can't connect to port 111 on the machine (using Outlook Express for example). The Stunnel log file shows the port binding is succeeding, but it never sees a connection. There's something preventing the connection to that port when STunnel is running as a service? Here's stunnel.conf cert = stunnel.pem ; Some performance tunings socket = l:TCP_NODELAY=1 socket = r:TCP_NODELAY=1 ; Some debugging stuff useful for troubleshooting debug = 7 output = stunnel.log ; Use it for client mode client = yes ; Service-level configuration [gmail] accept = 127.0.0.1:111 connect = pop.gmail.com:995 stunnel.log from service 2010.10.07 12:14:22 LOG5[80444:72984]: Reading configuration from file stunnel.conf 2010.10.07 12:14:22 LOG7[80444:72984]: Snagged 64 random bytes from C:/.rnd 2010.10.07 12:14:23 LOG7[80444:72984]: Wrote 1024 new random bytes to C:/.rnd 2010.10.07 12:14:23 LOG7[80444:72984]: PRNG seeded successfully 2010.10.07 12:14:23 LOG7[80444:72984]: Certificate: stunnel.pem 2010.10.07 12:14:23 LOG7[80444:72984]: Certificate loaded 2010.10.07 12:14:23 LOG7[80444:72984]: Key file: stunnel.pem 2010.10.07 12:14:23 LOG7[80444:72984]: Private key loaded 2010.10.07 12:14:23 LOG7[80444:72984]: SSL context initialized for service gmail 2010.10.07 12:14:23 LOG5[80444:72984]: Configuration successful 2010.10.07 12:14:23 LOG5[80444:72984]: No limit detected for the number of clients 2010.10.07 12:14:23 LOG7[80444:72984]: FD=156 in non-blocking mode 2010.10.07 12:14:23 LOG7[80444:72984]: Option SO_REUSEADDR set on accept socket 2010.10.07 12:14:23 LOG7[80444:72984]: Service gmail bound to 0.0.0.0:111 2010.10.07 12:14:23 LOG7[80444:72984]: Service gmail opened FD=156 2010.10.07 12:14:23 LOG5[80444:72984]: stunnel 4.34 on x86-pc-mingw32-gnu with OpenSSL 1.0.0a 1 Jun 2010 2010.10.07 12:14:23 LOG5[80444:72984]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6 stunnel.log from desktop (working) process 2010.10.07 12:10:31 LOG5[80824:81200]: Reading configuration from file stunnel.conf 2010.10.07 12:10:31 LOG7[80824:81200]: Snagged 64 random bytes from C:/.rnd 2010.10.07 12:10:32 LOG7[80824:81200]: Wrote 1024 new random bytes to C:/.rnd 2010.10.07 12:10:32 LOG7[80824:81200]: PRNG seeded successfully 2010.10.07 12:10:32 LOG7[80824:81200]: Certificate: stunnel.pem 2010.10.07 12:10:32 LOG7[80824:81200]: Certificate loaded 2010.10.07 12:10:32 LOG7[80824:81200]: Key file: stunnel.pem 2010.10.07 12:10:32 LOG7[80824:81200]: Private key loaded 2010.10.07 12:10:32 LOG7[80824:81200]: SSL context initialized for service gmail 2010.10.07 12:10:32 LOG5[80824:81200]: Configuration successful 2010.10.07 12:10:32 LOG5[80824:81200]: No limit detected for the number of clients 2010.10.07 12:10:32 LOG7[80824:81200]: FD=156 in non-blocking mode 2010.10.07 12:10:32 LOG7[80824:81200]: Option SO_REUSEADDR set on accept socket 2010.10.07 12:10:32 LOG7[80824:81200]: Service gmail bound to 0.0.0.0:111 2010.10.07 12:10:32 LOG7[80824:81200]: Service gmail opened FD=156 2010.10.07 12:10:33 LOG5[80824:81200]: stunnel 4.34 on x86-pc-mingw32-gnu with OpenSSL 1.0.0a 1 Jun 2010 2010.10.07 12:10:33 LOG5[80824:81200]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6 2010.10.07 12:10:33 LOG7[80824:81844]: Service gmail accepted FD=188 from 127.0.0.1:24813 2010.10.07 12:10:33 LOG7[80824:81844]: Creating a new thread 2010.10.07 12:10:33 LOG7[80824:81844]: New thread created 
2010.10.07 12:10:33 LOG7[80824:25144]: Service gmail started 2010.10.07 12:10:33 LOG7[80824:25144]: FD=188 in non-blocking mode 2010.10.07 12:10:33 LOG7[80824:25144]: Option TCP_NODELAY set on local socket 2010.10.07 12:10:33 LOG5[80824:25144]: Service gmail accepted connection from 127.0.0.1:24813 2010.10.07 12:10:33 LOG7[80824:25144]: FD=212 in non-blocking mode 2010.10.07 12:10:33 LOG6[80824:25144]: connect_blocking: connecting 209.85.227.109:995 2010.10.07 12:10:33 LOG7[80824:25144]: connect_blocking: s_poll_wait 209.85.227.109:995: waiting 10 seconds 2010.10.07 12:10:33 LOG5[80824:25144]: connect_blocking: connected 209.85.227.109:995 2010.10.07 12:10:33 LOG5[80824:25144]: Service gmail connected remote server from 192.168.1.9:24814 2010.10.07 12:10:33 LOG7[80824:25144]: Remote FD=212 initialized 2010.10.07 12:10:33 LOG7[80824:25144]: Option TCP_NODELAY set on remote socket 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): before/connect initialization 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write client hello A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read server hello A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read server certificate A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read server done A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write client key exchange A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write change cipher spec A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 write finished A 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 flush data 2010.10.07 12:10:33 LOG7[80824:25144]: SSL state (connect): SSLv3 read finished A 2010.10.07 12:10:33 LOG7[80824:25144]: 1 items in the session cache 2010.10.07 12:10:33 LOG7[80824:25144]: 1 client connects (SSL_connect()) 2010.10.07 12:10:33 LOG7[80824:25144]: 1 client connects that finished 2010.10.07 12:10:33 LOG7[80824:25144]: 0 client renegotiations requested 2010.10.07 12:10:33 LOG7[80824:25144]: 0 server connects (SSL_accept()) 2010.10.07 12:10:33 LOG7[80824:25144]: 0 server connects that finished 2010.10.07 12:10:33 LOG7[80824:25144]: 0 server renegotiations requested 2010.10.07 12:10:33 LOG7[80824:25144]: 0 session cache hits 2010.10.07 12:10:33 LOG7[80824:25144]: 0 external session cache hits 2010.10.07 12:10:33 LOG7[80824:25144]: 0 session cache misses 2010.10.07 12:10:33 LOG7[80824:25144]: 0 session cache timeouts 2010.10.07 12:10:33 LOG6[80824:25144]: SSL connected: new session negotiated 2010.10.07 12:10:33 LOG6[80824:25144]: Negotiated ciphers: RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5 2010.10.07 12:10:34 LOG7[80824:25144]: SSL socket closed on SSL_read 2010.10.07 12:10:34 LOG7[80824:25144]: Sending socket write shutdown 2010.10.07 12:10:34 LOG5[80824:25144]: Connection closed: 53 bytes sent to SSL, 118 bytes sent to socket 2010.10.07 12:10:34 LOG7[80824:25144]: Service gmail finished (0 left)

    Read the article

  • Is my Windows partition too far down on the disk?

    - by Trevor Alexander
    I have /boot/ on /dev/sda1 (1GB), followed by my Linux root LVM on /dev/sda2 (1.3GB). Finally, I installed Windows 7 on /dev/sda3 in the remaining 700GB of space. When I select Windows 7 in the grub menu, I get something like the following error and am thrown to grub4dos: find --set-root --ignore-floppies --ignore-cd /bootmgr Error 15: file not found Unable to locate necessary tables for adjustment. None of the options in grub4dos return anything but the above error. I heard that 1TB is the upper limit for locating Windows 7 partitions; is this true? How can I fix the above?

    Read the article

  • Microsoft C# Most Valuable Professional

    - by Robz / Fervent Coder
    Recently I was awarded the Microsoft Most Valuable Professional (MVP) for Visual C#. For those that don’t know it’s an annual award based on nominations from peers and Microsoft. Although there are just over 4,000 MVPs worldwide from all kinds of specializations, there are less than 100 C# MVPs in the US. There is more information at the site: https://mvp.support.microsoft.com The Microsoft MVP Award is an annual award that recognizes exceptional technology community leaders worldwide who actively share their high quality, real world expertise with users and Microsoft. With fewer than 5,000 awardees worldwide, Microsoft MVPs represent a highly select group of experts. MVPs share a deep commitment to community and a willingness to help others. To recognize the contributions they make, MVPs from around the world have the opportunity to meet Microsoft executives, network with peers, and position themselves as technical community leaders. Here is my profile: https://mvp.support.microsoft.com/default.aspx/profile/rob.reynolds I want to thank those that nominated me, without nominations this would never have happened. Thanks to Microsoft for liking me and finding my achievements and contributions to the community to be worth something. It’s good to know when you put in a lot of hard work that you get rewarded! I also want to thank many of the people I have worked with over the last 7 years. You guys have been great and I’m definitely standing on the shoulders of giants! Thanks to KDOT for giving me that first shot into professional programming and the experience and all of the training! A special thanks to @drusellers for kick starting me when I went stale in my learning back in 2007 and for always pushing me and bouncing ideas off of me. Without you I don’t think I would have made it this far. Thanks Alt.NET for keeping it fresh and funky! A very special thank you goes out to my wife for supporting me and locking me in the basement to work on all of my initiatives!

    Read the article

  • Thursday at OpenWorld: Identity Management

    - by Tanu Sood
    Before you know it, we are at the last day at Oracle OpenWorld. But just the same, Thursday is packed with informational, educational and networking opportunities. Here’s what is in store for you today: Thursday, October 4, 2012 CON5749: Solutions for Migration of Oracle Waveset to Oracle Identity Manager 11:15 a.m. – 12:15 p.m., Moscone West 3008 Many customers of Oracle Waveset (formerly Sun Identity Manager) are planning a migration to the strategic provisioning product Oracle Identity Manager. There are several approaches to migrating to Oracle Identity Manager. Presented by Hub City Media and Oracle, this session covers these various approaches to help you select the optimum choice for your implementation. CON9640: Evolving Identity Management 12:45 p.m. – 1:45 p.m., Moscone West 3008 Identity management requirements have evolved and are continuing to evolve as organizations seek to secure cloud and mobile access. Customers are seeing good success reducing costs and supporting business growth by embracing a service-oriented, platform approach to addressing identity management requirements. This session will explore these emerging requirements and share best practices for evolving your implementation. CON9662: Securing Oracle Applications with the Oracle Enterprise Identity Management Platform 2:15 p.m. – 3:15 p.m., Moscone West 3008 Oracle Enterprise Identity Management solutions are designed to secure access and simplify compliance to Oracle Applications. Whether you are an EBS customer looking to upgrade from Oracle Single Sign-on or a Fusion Application customer seeking to leverage the Identity instance as an enterprise security platform, this session with Qualcomm and Oracle will help you understand how to get the most out of your investment. HOL10479: Integrated Identity Governance 12:45 p.m. – 1:45 p.m., Marriott Marquis – Salon 1/2 This hands-on lab demonstrates Oracle’s integrated and self-service-oriented identity governance solution, which includes simple access request, business-user-friendly access certification, closed-loop remediation, and both standard and privileged accounts. For a complete listing, refer to the Focus on Identity Management document. And as always, you can find us on @oracleidm on Twitter and Facebook. Use #oow and #idm to join in the conversation.

    Read the article

  • How to Skip the Start Screen and Boot to the Desktop in Windows 8.1

    - by Mark Wilson
    For almost everyone who made the upgrade, Windows 8 proved to be something of a disappointment for one reason or another. Windows 8.1 (or Windows Blue) was released to address many of the issues users had complained about, including reintroducing the ability to boot straight to the desktop. Being able to boot to the desktop rather than the Start screen is something that people have been clamoring for ever since the first preview versions of Windows 8 were unveiled. Various third-party tools and workarounds have been released to get around the problem, but now it is an option that is built directly into the operating system. You’ll need to have downloaded and installed the update in order to proceed, but once you have done this, things are very simple. When you have Windows up and running after the upgrade, right-click an empty section of the taskbar and select Properties to bring up the newly named “Taskbar and Navigation properties” dialog. Move to the Navigation tab and look in the “Start screen” section in the lower half of the dialog. Check the box labelled “Go to the desktop instead of Start when I sign in” and click OK.

    Read the article

  • Excel scatter chart with multiple date ranges

    - by Abiel
    I have multiple blocks of time series data on an Excel sheet, with each block having its own set of dates. For example, I might have dates in column A, values in column B, and then dates in column D and values in column E. The values in B go with the dates in A, and the values in E go with the dates in D. The dates in A and D may not be the same. I would like to create a scatter chart with a time category axis that is the union of my two input date ranges in columns A and D. If I select all the data and then go insert chart (in Excel 2010), Excel treats only column A as the X axis, and looks at D as just another set of values. I can get Excel to do what I want by first just charting columns A and B, then selecting D and E and copy-pasting onto the chart. However, I would like to avoid this two-step procedure if possible.

    Read the article

  • How to print labels from a UPS printer on the UPS website

    - by paynes_bay
    I have several computers in my office that have UPS printers attached to them. On most of these computers, if you go to ups.com, log in, and print out a shipping label, it prints out just fine. The website somehow selects the appropriate printer and prints to it. It doesn't present a prompt asking you to select the printer, the number of pages, etc. - it just prints it. Only problem: there's one computer on which it's not doing this and I don't know why. I can see the printer in Printers and Faxes and can print out test pages from the Properties tab, so the printer clearly works - it just isn't printing from UPS's website. Any ideas?

    Read the article

  • SVN does unnecessary chmod on .svn temp files

    - by ???
    My working dir is on a TrueCrypt NTFS volume, with umask 000, so I can read/write any files with no problem. But I can't run svn commands on it; for example `svn update' shows the error: svn: Can't set permissions on '.svn/tempfile.8.tmp': strace svn up gives: ... chmod("sbin/.svn/tempfile.2.tmp", 0770) = -1 EPERM (Operation not permitted) fcntl64(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 write(3, "( failure ( ( 1 76:Can't set per"..., 172) = 172 fcntl64(3, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK) fcntl64(3, F_SETFL, O_RDWR) = 0 read(3, "( abort-edit ( ) ) ( failure ( ("..., 4096) = 191 gettimeofday({1276661368, 382789}, NULL) = 0 lstat64("sbin", {st_mode=S_IFDIR|0770, st_size=0, ...}) = 0 select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout) write(2, "svn: Can't set permissions on 's"..., 82svn: Can't set permissions on 'sbin/.svn/tempfile.2.tmp': Operation not permitted) = 82 close(3) = 0 So, the error occurs when svn tries to chmod some temp files. But this is disallowed on TrueCrypt volumes, and it's unnecessary anyway. Can I bypass the chmod library calls when launching svn on TrueCrypt volumes?

    Read the article

  • Database OR Array

    - by rezoner
    What is the exact point of using an external database system if I have simple relations (95% of queries are dependent on ID)? I am storing users and their stats. Why would I use an external database if I can have neat constructions like: db.users[32] = something An array of 500K users is not that big a load for RAM. Pros are: no problematic asynchronicity (instant results) easy export/import dealing with the database LITERALLY like a native object P.S. and considerations: Would it be faster or slower to do collection[3] than db.query("select ... I am going to store it as a file/s There is only ONE application/process accessing this data, and the code is executed line by line - please don't elaborate about locking. Please don't answer with database suggestions, but rather why to use an external DB over a native array/object - I have experience with a few databases, so that's not the issue. What I am building is a client/gateway/server(s) game. The gateway deals with all user data, processing, authenticating, writing statistics, etc. No other part of the software needs to access this data/database directly.
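
    For comparison only — and purely as an assumed illustration, since no schema is given in the question — the database-side equivalent of db.users[32] is a primary-key lookup, which the engine answers with an index seek rather than a scan; the real trade-off is therefore less about lookup speed and more about durability, concurrency and query flexibility.

    ```sql
    -- Assumed schema for illustration; none of these names come from the question.
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        score INTEGER NOT NULL DEFAULT 0
    );

    -- The equivalent of db.users[32]: a single-row primary-key seek.
    SELECT id, name, score
    FROM users
    WHERE id = 32;
    ```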

    Read the article

  • Why does iTunes make 2 copies of my music when adding to library?

    - by NoCatharsis
    I set iTunes to "Keep iTunes Media folder organized" and to "Copy files to iTunes Media folder when adding to library" because I prefer to keep my music consolidated, organized, and consistent. However, when I have MP3s that are external to iTunes, then try to add them via File > Add Folder to Library, iTunes creates 2 copies of the file in the iTunes folder - one with the original song name and another with the original song name followed by the number 1. Here is what I thought would happen, and I hope is possible: 1) Click File > Add Folder to Library 2) Select a folder external to iTunes 3) Click OK 4) iTunes creates a clean new folder in the iTunes Music directory with exactly 1 of each file 5) Only 1 of each song is shown within iTunes Is this too much to ask? I am not an iTunes fan at all after 2 years of dealing with the poor programming of this application. I hope someone can help me find the faith...

    Read the article

  • HTML5-MVC application using VS2010 SP1

    - by nmarun
    This is my first attempt at creating HTML5 pages. VS 2010 allows working with HTML5 now (you just need to make a small change after installing SP1). So my Razor view is now a HTML5 page. I call this application - 5Commerce – (an over-simplified) HTML5 ECommerce site. So here’s the flow of the application: home page renders user enters first and last name, chooses a product and the quantity can enter additional instructions for the order place the order user is then taken to another page showing the order details Off to the details. This is what my page looks in Google Chrome 10 beta (or later) soon after it renders. Here are some of the things to observe on this. Look a little closer and you’ll see a border around the first name textbox – this is ‘autofocus’ in action. I’ve set the autofocus attribute on this textbox. So as soon as the page loads, this control gets focus. 1: <input type="text" autofocus id="firstName" class="inputWidth" data_minlength="" 2: data_maxlength="" placeholder="first name" /> See a partially grayed out ‘last name’ text in the second textbox. This is set using a placeholder attribute (see above). It gets wiped out on-focus and improves the UI visuals in general. The quantity textbox is actually a numerical-only textbox. 1: <input type="number" id="quantity" data_mincount="" class="inputWidth" /> The last line is for additional instructions. This looks like a label but it’s content is editable. Just adding the ‘contenteditable’ attribute to the span allow the user to edit the text inside. 1: <span contenteditable id="additionalInstructions" data_texttype="" class="editableContent">select text and edit </span> All of the above is just plain HTML (no lurking javascript acting in here). Makes it real clean and simple. Going more into the HTML, I see that the _Layout.cshtml already is using some HTML5 content. I created my project before installing SP1, so that was the reason for my surprise. 1: <!DOCTYPE html> This is the doctype declaration in HTML5 and this is supported even by IE6 (just take my word on IE6 now, don’t go install it to test it, especially when MS is doing an IE6 countdown). That’s just amazing and extremely easy to read remember and talk about a few less bytes on every call! I modified the rest of my _Layout.cshtml to the below: 1: <!DOCTYPE html> 2: <html> 3: <head> 4: <title>5Commerce - HTML 5 Ecommerce site</title> 5: <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> 6: <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script> 7: <script src="@Url.Content("~/Scripts/CustomScripts.js")" type="text/javascript"></script> 8: <script type="text/javascript"> 9: $(document).ready(function () { 10: WireupEvents(); 11: }); 12:</script> 13:  14: </head> 15:  16: <body role="document" class="bodybackground"> 17: <header role="heading"> 18: <h2>5Commerce - HTML 5 Ecommerce site!</h2> 19: </header> 20: <section id="mainForm"> 21: @RenderBody() 22: </section> 23: <footer id="page_footer" role="siteBaseInfo"> 24: <p>&copy; 2011 5Commerce Inc!</p> 25: </footer> 26: </body> 27: </html> I’m sure you’re seeing some of the new tags here. To give a brief intro about them: <header>, <footer>: Marks the header/footer region of a page or section. <section>: A logical grouping of content role attribute: Identifies the responsibility of an element. This attribute can be used by screen readers and can also be filtered through jQuery. SP1 also allows for some intellisense in HTML5. 
You see the other types of input fields – email, date, datetime, month, url and there are others as well. So once my page loads, i.e., ‘on document ready’, I’m wiring up the events following the principles of unobtrusive javascript. In the snippet below, I’m controlling the behavior of the input controls for specific events. 1: $("#productList").bind('change blur', function () { 2: IsSelectedProductValid(); 3: }); 4:  5: $("#quantity").bind('blur', function () { 6: IsQuantityValid(); 7: }); 8:  9: $("#placeOrderButton").click( 10: function () { 11: if (IsPageValid()) { 12: LoadProducts(); 13: } 14: }); This enables some client-side validation to occur before the data is sent to the server. These validation constraints are obtained through a JSON call to the WCF service and are set to the ‘data_’ attributes of the input controls. Have a look at the ‘GetValidators()’ function below: 1: function GetValidators() { 2: // the post to your webservice or page 3: $.ajax({ 4: type: "GET", //GET or POST or PUT or DELETE verb 5: url: "http://localhost:14805/OrderService.svc/GetValidators", // Location of the service 6: data: "{}", //Data sent to server 7: contentType: "application/json; charset=utf-8", // content type sent to server 8: dataType: "json", //Expected data format from server 9: processdata: true, //True or False 10: success: function (result) {//On Successfull service call 11: if (result.length > 0) { 12: for (i = 0; i < result.length; i++) { 13: if (result[i].PropertyName == "FirstName") { 14: if (result[i].MinLength > 0) { 15: $("#firstName").attr("data_minLength", result[i].MinLength); 16: } 17: if (result[i].MaxLength > 0) { 18: $("#firstName").attr("data_maxLength", result[i].MaxLength); 19: } 20: } 21: else if (result[i].PropertyName == "LastName") { 22: if (result[i].MinLength > 0) { 23: $("#lastName").attr("data_minLength", result[i].MinLength); 24: } 25: if (result[i].MaxLength > 0) { 26: $("#lastName").attr("data_maxLength", result[i].MaxLength); 27: } 28: } 29: else if (result[i].PropertyName == "Quantity") { 30: if (result[i].MinCount > 0) { 31: $("#quantity").attr("data_minCount", result[i].MinCount); 32: } 33: } 34: else if (result[i].PropertyName == "AdditionalInstructions") { 35: if (result[i].TextType.length > 0) { 36: $("#additionalInstructions").attr("data_textType", result[i].TextType); 37: } 38: } 39: } 40: } 41: }, 42: error: function (result) {// When Service call fails 43: alert('Service call failed: ' + result.status + ' ' + result.statusText); 44: } 45: }); 46:  47: //.... 48: } Just before the GetValidators() function runs and sets the validation constraints, this is what the html looks like (seen through the Dev tools of Chrome): After the function executes, you see the values in the ‘data_’  attributes. As and when we enter valid data into these fields, the error messages disappear, since the validation is bound to the blur event of the control. There you see… no error messages (well, the catch here is that once you enter THAT name, all errors disappear automatically). Clicking on ‘Place Order!’ runs the SaveOrder function. You can see the JSON for the order object that is getting constructed and passed to the WCF Service. 
1: function SaveOrder() { 2: var addlInstructionsDefaultText = "select text and edit"; 3: var addlInstructions = $("span:first").text(); 4: if(addlInstructions == addlInstructionsDefaultText) 5: { 6: addlInstructions = ''; 7: } 8: var orderJson = { 9: AdditionalInstructions: addlInstructions, 10: Customer: { 11: FirstName: $("#firstName").val(), 12: LastName: $("#lastName").val() 13: }, 14: OrderedProduct: { 15: Id: $("#productList").val(), 16: Quantity: $("#quantity").val() 17: } 18: }; 19:  20: // the post to your webservice or page 21: $.ajax({ 22: type: "POST", //GET or POST or PUT or DELETE verb 23: url: "http://localhost:14805/OrderService.svc/SaveOrder", // Location of the service 24: data: JSON.stringify(orderJson), //Data sent to server 25: contentType: "application/json; charset=utf-8", // content type sent to server 26: dataType: "json", //Expected data format from server 27: processdata: false, //True or False 28: success: function (result) {//On Successfull service call 29: window.location.href = "http://localhost:14805/home/ShowOrderDetail/" + result; 30: }, 31: error: function (request, error) {// When Service call fails 32: alert('Service call failed: ' + request.status + ' ' + request.statusText); 33: } 34: }); 35: } The service saves this order into an XML file and returns the order id (a guid). On success, I redirect to the ShowOrderDetail action method passing the guid. This page will show all the details of the order. Although the back-end weightlifting is done by WCF, I did not show any of that plumbing-work as I wanted to concentrate more on the HTML5 and its associates. However, you can see it all in the source here. I do have one issue with HTML5 and this is an existing issue with HTML4 as well. If you see the snippet above where I’ve declared a textbox for first name, you’ll see the autofocus attribute just dangling by itself. It doesn’t follow the xml syntax of ‘key="value"’ allowing users to continue writing badly-formatted html even in the new version. You’ll see the same issue with the ‘contenteditable’ attribute as well. The work-around is that you can do ‘autofocus=”true”’ and it’ll work fine plus make it well-formatted. But unless the standards enforce this, there will be people (me included) who’ll get by, by just typing the bare minimum! Hoping this will get fixed in the coming version-updates. Source code here. Verdict: I think it’s time for us to embrace the new HTML5. Thank you HTML4 and Welcome HTML5.

    Read the article

  • SQL SERVER – How to Change Compatibility of Database to SQL Server 2014

    - by Pinal Dave
    Yesterday I wrote about how we can install SQL Server 2014. Right after the blog post was live, I received a question from a developer who had installed SQL Server 2014 and attached a database file from a previous version of SQL Server. Right after attaching the database, he was not able to work with the latest Cardinality Estimation features. As soon as he sent me the email I realized exactly what had happened. When he attached the database, the database compatibility level was still set to that of the earlier version of SQL Server. To use most of the latest features of SQL Server 2014, one has to change the compatibility level of the database to the latest version (i.e. 120). Here are two different ways we can change the compatibility of a database to SQL Server 2014's version. 1) Using Management Studio For this method, first go to the database and right-click on it. Now select Properties. On this screen the user can change the compatibility level to 120. 2) Using a T-SQL Script You can execute the following script and change the compatibility setting to 120. USE [master] GO ALTER DATABASE [AdventureWorks2012] SET COMPATIBILITY_LEVEL = 120 GO Well, it is that easy :-) Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
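
    To verify the change (a small sketch of my own, not from the post), the current level of every database on the instance can be read from sys.databases before and after running either method:

    ```sql
    -- 120 = SQL Server 2014, 110 = SQL Server 2012, 100 = SQL Server 2008/2008 R2.
    SELECT name, compatibility_level
    FROM sys.databases
    ORDER BY name;
    ```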

    Read the article

  • Good fractal visualizer

    - by Gnoupi
    Each time it's the same thing: I find one, then I forget its name by the next time I want to "dive" into such things. I'm looking for a good fractal "visualizer". The kind in which you just select a well-known fractal model (or variations on it), and which you can then "dive" into, just zooming in or out smoothly, for the sheer pleasure of losing yourself in it. I'm looking for an executable, preferably for Windows, but any OS accepted. Keep to one program per answer (and one answer per program), as this is community wiki.

    Read the article

  • Ubuntu One synchronizing problems

    - by user72249
    I am a user of Ubuntu and Ubuntu One. First: I had a serious problem with Ubuntu One. I have a Windows machine and several Ubuntu machines and they all sync one account. My first problem is that I can't really delete files. Whenever I delete something, Ubuntu One resyncs it, so it appears again in my folder. Furthermore, when I move something to another dir it synchronizes the new location and resyncs the old one, so my files get doubled. I can't reorganize my files. So I tried to delete the duplicated files through the web dash, but I can't reorder and select multiple files by extension. For example I wanted to move all PDFs to another location... Second: I can't configure the location of the main One folder in the Ubuntu One app. For example I want my One folder to be on another partition than my HOME folder. It is a major problem on Linux and Windows alike. I tried to move that folder in Windows with the hardlink method. So I uninstalled U1, then I created a folder link to another drive, then installed U1. THEN I lost all my files in U1. Lucky thing I have a backup, but I thought U1 would be a stable solution for me, and I planned to extend my space, but these bugs are major problems! It's worth paying for if it works perfectly. I think that's more important than releasing a new version every month.

    Read the article

  • Fedora installation with software repository on DVD does not work

    - by Raks
    I bought a newly assembled PC with a Core i3-2120 processor and an Intel H61 motherboard and was trying to install Fedora 16 from a DVD. This DVD contains all the packages so that the installation does not require downloading packages from the internet. I have used this DVD to install Fedora 16 offline many times, though on machines with different hardware configurations. But on this new machine, when the installation reaches the stage where it asks for software repository selection, I select CD/DVD, but the system fails to read the media and throws up an error that it cannot detect the media. The LED on the DVD writer also indicates that the DVD is not being read. Now there is no problem with either the DVD or the DVD drive, because the installation started from the DVD itself. So what could the problem be? Is there anything in the BIOS that is causing it? Is there any way I could utilize the packages already existing on the DVD so that I am saved from downloading the packages?

    Read the article
