Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Google I/O 2012 - OAuth 2.0 for Identity and Data Access

    Google I/O 2012 - OAuth 2.0 for Identity and Data Access, presented by Ryan Boyd.

    Users like to keep their data in one place on the web where it's easily accessible. Whether it's YouTube videos, Google Drive files, Google contacts or one of many other types of data, users need a way to securely grant applications access to their data. OAuth is the key web standard for delegated data access, and OAuth 2.0 is the next-generation version with additional security features. This session will cover the latest advances in how OAuth can be used for data access, but will also dive into how you can lower the barrier to entry for your application by allowing users to log in using their Google accounts. You will learn, through an example written in Python, how to use OAuth 2.0 to incorporate user identity into your web application. Best practices for desktop applications, mobile applications and server-to-server use cases will also be discussed.

    From: GoogleDevelopers | Views: 11 | Ratings: 1 | Time: 58:56 | More in Science & Technology
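    The session's Python sample isn't reproduced here, but the core of the flow it describes, exchanging an authorization code for tokens, can be sketched with a plain HTTP call. This is a minimal, hedged curl example: the client ID, client secret, redirect URI and authorization code are placeholders you would obtain from the Google APIs Console and the consent redirect.

        # Exchange an authorization code for an access token (and refresh token)
        # against Google's OAuth 2.0 token endpoint
        curl -s https://accounts.google.com/o/oauth2/token \
          -d code="AUTHORIZATION_CODE_FROM_REDIRECT" \
          -d client_id="YOUR_CLIENT_ID" \
          -d client_secret="YOUR_CLIENT_SECRET" \
          -d redirect_uri="https://your-app.example.com/oauth2callback" \
          -d grant_type=authorization_code

        # The JSON response carries access_token, expires_in and, if requested,
        # refresh_token; API calls then send "Authorization: Bearer <access_token>".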

    Read the article

  • How do I change which audio jacks are used for input and output?

    - by yamaha1996
    I'm using a Realtek HD audio card built into my motherboard. The Windows driver comes with a control panel that lets me select which back-panel jacks are used for what, so for example I can make both the blue jack and the green jack outputs and keep only the red one for mic-in. (By default the blue jack is line-in, which I never need.) How can I do the same under Linux? If possible, please don't suggest something that involves PulseAudio or JACK; I'd like to do it the plain way, e.g. by editing ALSA configuration files. The way I understand it, my problem should have nothing to do with a sound server redirecting streams, just with instructing the driver to treat a given jack as this or that, since the hardware supports it. Thank you very much!
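    There is no single "plain ALSA" switch for this, but the HDA driver itself can usually be told how to map the jacks. Below is a hedged sketch of the common approach; the model string is only an example (valid values for your codec are listed in the kernel's HD-Audio-Models documentation), and for arbitrary per-jack retasking the hdajackretask tool from the alsa-tools packages is the usual route.

        # 1. See which Realtek codec the snd-hda-intel driver detected
        grep Codec /proc/asound/card*/codec#*

        # 2. Try a driver "model" quirk matching your board's jack layout
        #    (example value only; pick one documented for your codec)
        echo "options snd-hda-intel model=6stack" | sudo tee /etc/modprobe.d/alsa-jacks.conf

        # 3. Reload ALSA (or reboot) so the quirk takes effect
        sudo alsa force-reload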

    Read the article

  • Can't find any /.ecryptfs dir to retrieve my encrypted home dir

    - by Roberto de Armas
    First of all, sorry for my English; it isn't my native language. I've read some similar questions, but none with exactly the same problem. I moved my home directory to a separate partition (Ubuntu 11.10) following this tutorial: http://www.ubuntu-es.org/node/58233. After checking that all my files and folders were there (forgetting that one of the directories was encrypted with eCryptfs), I installed Fedora 16. I was surprised to find a Readme.txt in my home folder advising me that my folder had been unmounted for security reasons and proposing that I either type "ecryptfs-mount-private" on the command line (didn't work) or click the "Access Your Private Data" desktop icon (didn't work either). After three days reading everything I could find on the internet, I followed Dustin Kirkland's tutorial at http://blog.dustinkirkland.com/2011/04/introducing-ecryptfs-recover-private.html, but no /.ecryptfs directory was found. I'm sure the data is somewhere (the size of the moved directory is identical to the original one). Any help would be greatly appreciated. Thanks a lot.
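    Since the automatic search in that tutorial found nothing, a hedged sketch of pointing the recovery helper at the lower directory by hand is below. The mount point /mnt/oldhome and the username are placeholders for wherever the moved home partition is mounted; the encrypted lower directory may be called .ecryptfs, .Private, or live under .ecryptfs/<username>/ depending on how the home was encrypted.

        # Look for the encrypted lower directory on the old home partition
        sudo find /mnt/oldhome -maxdepth 3 \( -name ".ecryptfs" -o -name ".Private" \) 2>/dev/null

        # Install the eCryptfs userspace tools (Fedora 16 uses yum)
        sudo yum install ecryptfs-utils

        # Point the recovery helper at the .Private directory it finds; it prompts for
        # the login or mount passphrase and mounts the decrypted contents read-only
        sudo ecryptfs-recover-private /mnt/oldhome/.ecryptfs/username/.Private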

    Read the article

  • Entity Framework 4, WCF & Lazy Loading Tip

    - by Dane Morgridge
    If you are doing any work with Entity Framework and custom WCF services in EFv1, everything works great. As soon as you jump to EFv4, you may find yourself getting odd errors that you can't seem to catch. The problem almost always has something to do with the new lazy loading feature in Entity Framework 4. Entity Framework 1 didn't have lazy loading, so this problem didn't surface.

    Assume I have a Person entity and an Address entity with a one-to-many relationship between Person and Address (Person has many Addresses). In Entity Framework 1 (or in EFv4 with lazy loading turned off), I would have to load the Address data by hand using either the Include or the Load method:

        var people = context.People.Include("Addresses");

    or

        people.Addresses.Load();

    With lazy loading, the data is fetched the first time the Person.Addresses collection is accessed:

        var people = context.People.ToList();

        // only Person data is currently in memory

        foreach (var person in people)
        {
            // EF determines that no Address data has been loaded and lazy loads
            int count = person.Addresses.Count();
        }

    Lazy loading has the useful (and sometimes not so useful) behavior of fetching data when it is requested. It can make your life easier or it can be a big pain. So what does this have to do with WCF? One word: serialization.

    When you need to pass data over the wire with WCF, the data contract is serialized into either XML or binary depending on the binding you are using. If I am using lazy loading, the Person entity gets serialized, and during that process the Addresses collection is accessed. When that happens, the Address data is lazy loaded. Then each Address is serialized, its Person property is accessed and serialized in turn, and then its Addresses collection is accessed again. The second time through, lazy loading doesn't kick in, but you can see the infinite loop caused by this process. This is a problem with any serialization; I personally ran into it with WCF.

    The fix is simply to turn lazy loading off. This can be done per call through the context options:

        context.ContextOptions.LazyLoadingEnabled = false;

    Turning lazy loading off will now allow your classes to be serialized properly. Note that this applies if you are using the standard Entity Framework classes. If you are using POCO, you will have to do something slightly different. With POCO, the Entity Framework creates proxy classes by default that allow features like lazy loading to work; the proxy is a full Entity Framework object that sits between the context and the POCO object. When using POCO with WCF (or any serialization), just turning off lazy loading doesn't cut it. You have to turn off proxy creation to ensure that your classes serialize properly:

        context.ContextOptions.ProxyCreationEnabled = false;

    The nice thing is that you can do this on a call-by-call basis. If you use a new context for each set of operations (which you should), then you can turn lazy loading or proxy creation on and off as needed.

    Read the article

  • Why is Ubuntu software not packaged in a single file?

    - by Anwar Shah
    Most Windows software is packaged as a single executable file: when I double-click the setup file, it installs all the files, binaries and libraries that come with it. I understand that Ubuntu packages, and Linux packages more generally, have dependencies, but I wonder why that is. Isn't it possible to build a single file that contains all the dependencies? What are the problems with that approach? Please explain the reasons in detail.
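    A quick way to see the shared-dependency model behind this is to ask the package manager what a single application pulls in; the package name below is just an example.

        # Show the libraries and other packages a single application depends on
        apt-cache depends vlc

        # Installing it fetches only the dependencies not already present, so many
        # applications share one copy of each library instead of each bundling its own
        sudo apt-get install vlc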

    Read the article

  • Visual Web Developer 2010 Express, automated testing, and SVN

    - by Mr. Jefferson
    We have an HTML designer who is not a developer but needs to modify .aspx files from our ASP.NET 2.0 projects from time to time in order to get CSS to work properly with them. Currently, this involves giving her the .aspx page by itself, which she opens and edits via Visual Studio 2008 (her computer used to be a developer's). I'm considering getting her set up with Visual Web Developer 2010 Express and Subversion access so she can be more independent, but I wanted to make sure VS Express will work properly with what we do. So: Does VWD 2010 Express support automated tests? If no to the above, what happens when it opens a solution file that includes a test project, modifies it, and saves it? Are there any potential snags with setting up AnkhSVN with VWD 2010 Express?

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database

    The last place I like to visit is always a hospital. With the monsoon season starting and intermittent rains, catching a cycle of fever every other year has become almost routine (seriously, I hate it). So when I visit my doctor, it is always interesting how he quizzes me: "How many days have you had this?", "Is there any pattern?", "Did you get drenched in the rain?", "Do you have any other symptoms?" and so on. The idea is that the doctor wants to find an anomaly or a pattern that will point him to a viral or bacterial cause. Most of the time they get it from experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution. SQL Server has its own way to find out whether the server's data files are in a consistent state: the DBCC commands.

    Back to SQL Server

    In real life, a database consistency check is one of the critical operations a DBA generally doesn't give much priority. Many readers of my blogs have asked: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out whether everything is right or not? My common answer is to look at the bottom of the CHECKDB (or CHECKTABLE) output for the line below:

        CHECKDB found 0 allocation errors and 0 consistency errors in database 'DatabaseName'.

    The above is a good sign, because we are seeing zero allocation errors and zero consistency errors. If you are seeing non-zero errors, then there is some problem with the database. Sample output is shown below:

        CHECKDB found 0 allocation errors and 2 consistency errors in database 'DatabaseName'.
        repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName).

    If we see non-zero errors, then most of the time (not always) we get repair options depending on the level of corruption. There is risk involved with the repair_allow_data_loss option: we could lose data. Sometimes the option is repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem.

    Among the standard reports there is one that shows the history of CHECKDB executions for the selected database. Since this is a database-level report, right-click the database, click Reports, click Standard Reports and then choose "Database Consistency History". The information in this report is picked from the default trace. If the default trace is disabled, or no CHECKDB has been run, or the information is no longer in the default trace (because it has rolled over), the report says it very clearly: "Currently, no execution history of CHECKDB is available or default trace is not enabled."

    To demonstrate, I caused corruption in one of my databases and did the following steps:

    - Ran CHECKDB so that errors were reported.
    - Fixed the corruption by losing the data, using the repair option.
    - Ran CHECKDB again to check that the corruption was cleared.

    After that I launched the report. If you are lazy like me and don't want to run the report manually for each database, the query below provides the same report for all databases. It is the query the report runs behind the scenes; all I have done is remove the filter on database name (the commented-out condition at the end of the WHERE clause).
        DECLARE @curr_tracefilename VARCHAR(500);
        DECLARE @base_tracefilename VARCHAR(500);
        DECLARE @indx INT;

        SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

        SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36, PATINDEX('%executed%', TEXTData) - 36) AS command,
               LoginName,
               StartTime,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%found%', TEXTData) + 6,
                       PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%repaired%', TEXTData) + 9,
                       PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%time:%', TEXTData) + 6,
                       PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':' +
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%hours%', TEXTData) + 6,
                       PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':' +
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%minutes%', TEXTData) + 8,
                       PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
        FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
        WHERE EventClass = 22
          AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
        -- AND DatabaseName = @DatabaseName;

    Don't worry about the logic above. All it is doing is reading the trace files, parsing entries like the one below, and pulling out the command, login, error counts and elapsed time:

        DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001.

    Hopefully from now on you will run CHECKDB and understand its importance. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it on your production environment.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL. Tagged: SQL Reports

    Read the article

  • Trying to do a batch rename, can't figure out the proper RegEx

    - by trezy
    I'm trying to rename my movie collection. All of the files are currently named using dots instead of spaces, i.e. Men.in.Black.avi. I want to replace all of the dots with spaces which isn't terribly difficult, but I need to preserve the last dot for the file extension, i.e. .avi, .mp4, .ogg, etc. My Googling has provided no solutions. I'm also a Javascript developer and could see some snazzy applications for it. So, any suggestions?
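    Since the question doesn't name an OS or tool, here is a hedged shell sketch run from inside the movie directory. The extensions listed are just the ones mentioned above, and mv -n avoids overwriting anything on a name collision.

        # Replace every dot except the last one (the extension separator) with a space
        for f in *.avi *.mp4 *.ogg; do
            [ -e "$f" ] || continue          # skip patterns that matched nothing
            name="${f%.*}"                   # everything before the final dot
            ext="${f##*.}"                   # the extension after the final dot
            mv -n -- "$f" "${name//./ }.$ext"
        done

    If the Perl rename utility is available, the same idea fits in one regex: a lookahead leaves the final dot alone, and -n previews the renames without performing them.

        rename -n 's/\.(?=.*\.)/ /g' *.avi *.mp4 *.ogg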

    Read the article

  • JavaOne 2012 - The Power of Java 7 NIO.2

    - by Sharon Zakhour
    At JavaOne 2012, Mohamed Taman of e-finance gave a presentation highlighting the power of NIO.2, the file I/O APIs introduced in JDK 7. He shared information on how to get the most out of NIO.2, gave tips on migrating your I/O code to NIO.2, and presented case studies. The File I/O (featuring NIO.2) lesson in the Java Tutorials has extensive coverage of NIO.2 and includes the following topics: Managing Metadata; Walking the File Tree; Finding Files, including information on using PathMatcher and globs; Watching a Directory for Changes; and Legacy File I/O Code, which includes information on migrating your code. From the conference session page, you can watch the presentation or download the materials.

    Read the article

  • How to get the native version of Spotify running?

    - by Dante Ashton
    A while ago Spotify (the streaming music service) released a Linux preview of their client. I ran it successfully throughout 10.04. Now that I'm on 10.10, I can't find it in the package manager, let alone install it. Software Sources gives me this: "Failed to fetch http://repository.spotify.com/dists/stable/Release Unable to find expected entry non-free/source/Sources in Meta-index file (malformed Release file?) Some index files failed to download, they have been ignored, or old ones used instead." So, as I'm paying for Spotify, what... umm... do I do? :P
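    The error complains about a missing source ("deb-src") index, which the Spotify repository simply doesn't publish. A hedged sketch of the usual fix is below; the package name follows Spotify's Linux preview instructions of that period and may have changed since.

        # Keep only the binary entry for the repository; remove or comment out any
        # "deb-src http://repository.spotify.com ..." line in your sources
        echo "deb http://repository.spotify.com stable non-free" | \
            sudo tee /etc/apt/sources.list.d/spotify.list

        # Refresh the package index and install the preview client
        sudo apt-get update
        sudo apt-get install spotify-client-qt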

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering 5x better single-thread performance than previous generations.

    Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor stands up to its promises. Several tests were performed, mainly focused on:

    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted of reading large amounts of data (tens of gigabytes), processing it and writing it back to a file or an Oracle 11gR2 database table. They are CPU-, memory- and IO-bound tests. Given the main focus of this project (CPU performance), bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:

        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        [...]

    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. the L2ARC): once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck, so having a good IO subsystem is key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline.

    For the database load tests, only the best IO configuration (using external storage devices) was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: Testing the single-thread performance

    Seven different tests were performed on both servers.
    Given that only one thread, and thus one file, was read, no IO bottleneck was involved; all data was served from the ZFS cache.

    - Read File -> Filter -> Write File: read a file, filter the data, write the filtered data to a new file. The filter is set on the "Status" column: only lines with status "A" are selected, which limits each output file to about 500 MB.
    - Read File -> Load Database Table: read a file, insert into a single Oracle table.
    - Average: read a file, compute the average of a numeric column, write the result to a new file.
    - Division & Square Root: read a file, perform a division and square root on a numeric column, write the result data to a new file.
    - Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: read a file, transform, write the result data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    - Sort: read a file, sort a numeric and an alphanumeric column, write the result data to a new file.

    The following table and graph present the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and the T4-1.

        Test                    T4-1 (Time s.)  T5140 (Time s.)  Improvement  T4-1 (Throughput)  T5140 (Throughput)
        Read/Filter/Write       125             806              645%         100                16
        Read/Load Database      195             1111             570%         64                 11
        Average                 96              557              580%         130                22
        Division & Square Root  161             1054             655%         78                 12
        Oracle DB Dump          164             945              576%         76                 13
        Transform               159             1124             707%         79                 11
        Sort                    251             1336             532%         50                 9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs.

    Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on the T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row-by-row insertion in Oracle DB is faster, with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.

    Read the article

  • Should I avoid SharePoint Development in Visual Studio?

    - by SaphuA
    Hello, Not long ago I started an internship at a company that supplies SharePoint consultancy, hosting and development. While their consultancy seems to be pretty good and solid, I feel their development department lacks direction. The reason for this, most likely, is that they stopped outsourcing not too long ago. One thing that I've frequently bumped my head into is the following: my supervisor strongly insists that everything that can be done natively in SharePoint (somehow this includes editing XSLT files in Designer) should be done in SharePoint, even if this results in longer development time (at least when they make me write XSLT) and reduced usability. Her main arguments for this are better maintainability, and that editing the functionality doesn't require programming knowledge. I feel the company is a little biased and I am unable to get a decent discussion going. This is why I am looking for other places to get some responses on the subject (and not only on the arguments of my supervisor, but more on the subject in general). Kind regards

    Read the article

  • What You Said: Do You Use the Command Line?

    - by Jason Fitzpatrick
    Earlier this week we asked you to sound off with your love (or lack thereof) for the command line. You sounded off in force and now we're back with a comment roundup. It turns out you all pretty much love the command line, with that love ranging from not even liking graphical user interfaces (GUIs) to using the command line to get serious work done while having a long-standing affair with your OS's GUI. Many of you lamented the poor command line implementation in Windows, especially after you'd had experience with other operating systems. Mike writes: "Of course. Some things are easier that way. Like ping and ipconfig. With a strong Unix background I still write and use batch files. It would be nice if the command line included more nice things like grep, sleep, touch. Maybe, someday, Windows will mature into a full OS."

    Read the article

  • Digital Asset Management System

    - by Prashant
    I am looking for an open-source, web-based digital asset management system. My requirements are to create a web-based system where users can upload and download .zip, .jpg, .png, .pdf, .doc, .xls etc. media files. There should also be user management, so that we can create multiple users and give them permissions accordingly. I have found one, http://www.resourcespace.org/, but it looks a bit big and complicated. It fits my needs, but I am looking and researching a bit more to find a good and easier-to-use system. If anyone knows of such a web-based system or tool, please share.

    Read the article

  • Is it safe to install Compiz Experimental Plugins 0.1.1 on Maverick?

    - by litvin05
    Does anyone have these plugins installed? Sorry, but I'm worried because my past attempts to update Compiz have failed, and when I try to install these plugins it asks me to update these packages: compiz-dev compiz-fusion-bcop debhelper html2text intltool-debian libcairo-gobject2 libcairo2-dev libdecoration0-dev libdrm-dev libexpat1-dev libfontconfig1-dev libfreetype6-dev libgl1-mesa-dev libglu1-mesa-dev libice-dev libkms1 libmail-sendmail-perl libpango1.0-dev libpixman-1-dev libpng12-dev libsm-dev libstartup-notification0-dev libsys-hostname-long-perl libx11-xcb-dev libxcb-render0-dev libxcb-shm0-dev libxcomposite-dev libxcursor-dev libxdamage-dev libxext-dev libxfixes-dev libxft-dev libxinerama-dev libxml2-dev libxrandr-dev libxrender-dev libxslt1-dev libxss-dev mesa-common-dev po-debconf x11proto-composite-dev x11proto-damage-dev x11proto-fixes-dev x11proto-randr-dev x11proto-render-dev x11proto-scrnsaver-dev x11proto-xext-dev x11proto-xinerama-dev Please answer my question, and I'll be very grateful! These plugins are here

    Read the article

  • Read-only USB stick that won't let me do anything to it

    - by Jonathon
    Somehow I messed up and accidentally turned my USB stick into a read-only file system. I have tried a bunch of things to delete the files, including the basic (rm -f myfile) and attempting to allow writing (sudo chmod +w myfile) and then deleting, but none of this seems to work. Any ideas on what I can do? I don't have anything on the USB stick that I need, but I don't want to throw away an otherwise perfectly good piece of equipment. How can I make it work? Am I going about this completely the wrong way?
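    A hedged sketch of the usual recovery steps is below. /dev/sdb1 is a placeholder for whatever device node dmesg shows for the stick, and if the drive's controller has failed into a hardware write-protected mode, no software step will bring it back.

        # Check whether the kernel forced the filesystem read-only because of errors
        dmesg | tail -n 30

        # Unmount, repair the filesystem (FAT shown; use the tool for your filesystem),
        # then mount it again read-write
        sudo umount /dev/sdb1
        sudo fsck.vfat -a /dev/sdb1
        sudo mount -o rw /dev/sdb1 /mnt

        # Since nothing on the stick is needed, rebuilding the filesystem is also an option
        sudo mkfs.vfat /dev/sdb1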

    Read the article

  • Create bootable USB install image from command line?

    - by j-g-faustus
    I'm trying to create a bootable USB image to install Ubuntu on a new computer. I have done this before following the "create USB drive" instructions for Ubuntu desktop, but I don't have an Ubuntu desktop available. How can I do the same using only the command line? Things I've tried:

    - Create bootable USB on Mac OS X following the ubuntu.com "create USB drive" instructions for Mac: doesn't boot.
    - usb-creator: according to apt-cache search usb-creator and Wikipedia, usb-creator only exists as a graphical tool.
    - "Create manually" instructions at help.ubuntu.com: none of the files and directories described (e.g. casper, filesystem.manifest, menu.lst) exist in the ISO image, and I don't know what has replaced them.

    (At my disposal is Mac OS X and Ubuntu server; I have neither Ubuntu desktop nor Windows.)
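    Recent Ubuntu ISOs are hybrid images, so from a command line the image can usually be written straight to the stick with dd. A hedged sketch follows: /dev/sdX and the ISO file name are placeholders, and dd will overwrite whatever device you point it at, so double-check it first.

        # Identify the USB device (the whole device, not a partition)
        sudo fdisk -l                       # on Mac OS X: diskutil list

        # Unmount it if it was auto-mounted, then write the ISO and flush caches
        sudo umount /dev/sdX1               # on Mac OS X: diskutil unmountDisk /dev/diskN
        sudo dd if=ubuntu-server.iso of=/dev/sdX bs=4M
        sync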

    Read the article

  • Nexus 7 (4.2.2) stuck as read-only on Ubuntu 13.04 (PC)

    - by Dalladubb
    I have a Nexus 7 running the latest Android (4.2.2) that seems to be stuck as read-only. I cannot transfer any files to or from the device, though I am free to look through it. Permissions are:

    View Content: Only Owner
    Change Content: Nobody
    Access Content: Nobody

    And when I try to change the permissions I get this error: "Operation not supported by backend". I'm baffled. This is a stock install of Ubuntu on my PC and the install isn't that old. Am I missing a lib or something? I feel the need to say it works fine on Windows 7. Thanks for looking.
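    The MTP support in the stock gvfs of that release was still limited, so a common workaround was to bypass it with a FUSE-based MTP filesystem. A hedged sketch is below; the package name is from the Ubuntu archive of that period and the mount point is arbitrary.

        # Install a FUSE MTP filesystem and mount the tablet manually
        sudo apt-get install jmtpfs
        mkdir -p ~/nexus7
        jmtpfs ~/nexus7            # the device must be unlocked and in MTP mode

        # Copy files both ways with normal tools, then unmount
        fusermount -u ~/nexus7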

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the operating system is being utilized. You can research the historical usage as well as real-time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes.

    The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3 (you can also download the presentation). See also the blog about Alert Monitoring and Problem Notification and the blog about Using Operational Profiles to Install Packages and other content.

    Here is a quick summary of what you can do with OS Analytics in Ops Center:

    - View historical charts and real-time values of CPU, memory, network and disk utilization
    - Find the top CPU and memory processes in real time or on a given historical day
    - Determine proper monitoring thresholds based on historical data
    - Drill down into a process's details

    Where to start

    To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real-time top 5 processes in each category. You can click each of the top 5 processes to see a more detailed view of that process. One of the cool things is that you can see the process tree for the process along with some port bindings and open file descriptors.

    Next, click the "Processes" tab to see real-time information about all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty.

    The "Threshold" tab is particularly helpful: you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the currently set levels: red for critical, yellow for warning and blue for information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU usage, try shorter, more detailed time frames, such as one hour or one day.

    Applying new monitoring values

    When first applying new values to monitored attributes, a popup will ask if it's OK to take you out of the current Monitoring Policy. This is OK if you want either to have custom monitoring for a specific machine, or to use the current machine as a "gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".
    Visiting the past

    Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?) but you were not around to take a look at it in real time. Looking at yesterday's data on one of the machines, you can see an interesting CPU spike happening at around 3:30 am, along with some memory use. In the bottom table you can see the top 5 CPU and memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here: you can also view historical data to determine which of the zones was the busiest at a given time.

    Under the hood

    The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/

    - An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch timestamp at which they were taken.
    - If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of each zone along with an "os.zip" in it.
    - If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller.

    The actual script collecting the data can be viewed for debugging purposes as well. On Linux, the location is /opt/sun/xvmoc/private/os_analytics/collect. If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it:

        # touch /tmp/.collect.stderr

    The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view them per agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics-specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner
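    To poke at the raw collection described above, the standard unzip tool is enough. The archive path is taken from the post; the scratch directory is just an example.

        # List the per-sample text files inside the historical OS data archive
        unzip -l /var/opt/sun/xvm/analytics/historical/os.zip

        # Extract them to a scratch directory; each file name is the Epoch timestamp
        # of the sample it holds
        mkdir -p /tmp/osa && unzip -d /tmp/osa /var/opt/sun/xvm/analytics/historical/os.zip
        ls /tmp/osa | head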

    Read the article

  • ReSharper File Location

    - by Ben Griswold
    By default, the ReSharper cache is stored in the solution folder.  It’s one extra folder and one extra .user file.  It’s no big deal but it does clutter up your solution a bit – especially since the files provide no real value. I prefer to store the ReSharper cache in the system Temp folder.  This setting is available by visiting ReSharper > Options > Environment > General. Just update where you’d like to store the ReSharper cache and you’re good to go.  Note, the .user file continues to linger around the solution folder but at least the _ReSharper.SolutionName folder is moved out of sight.

    Read the article

  • Sysadmin Nightmares – Server Room Disasters [Videos]

    - by Asian Angel
    There you are, looking at a pristine server room when disaster suddenly strikes! Whether it is fire, floods, or other causes you will feel sympathy for the sysadmins involved when watching this collection of seven server room disasters that Wired has put together. You can view the other six videos in the collection by visiting the Wired post linked below… Server Snuff: 7 Videos of a Sysadmin’s Worst Nightmares [via Fail Desk]

    Read the article

  • 3 Ways to Make Steam Even Faster

    - by Chris Hoffman
    Have you ever noticed how slow Steam's built-in web browser can be? Do you struggle with slow download speeds? Or is Steam just slow in general? These tips will help you speed it up. Steam isn't a game itself, so there are no 3D settings to change to achieve maximum performance. But there are some things you can do to speed it up dramatically.

    Speed Up the Steam Web Browser

    Steam's built-in web browser — used in both the Steam store and in Steam's in-game overlay to provide a web browser you can quickly use within games — can be frustratingly slow on many systems. Rather than the typical speed we've come to expect from Chrome, Firefox, or even Internet Explorer, Steam seems to struggle. When you click a link or go to a new page, there's a noticeable delay before the new page appears — something that doesn't happen in desktop browsers. Many people seem to have made peace with this slowness, accepting that Steam's built-in browser is just bad. However, there's a trick that will eliminate this delay on many systems and make the Steam web browser fast.

    This problem seems to arise from an incompatibility with the Automatically Detect Proxy Settings option, which is enabled by default on Windows. This is a compatibility option that very few people should actually need, so it's safe to disable it. To disable this option, open the Internet Options dialog — press the Windows key to access the Start menu or Start screen, type Internet Options, and click the Internet Options shortcut. Select the Connections tab in the Internet Options window and click the LAN settings button. Uncheck the Automatically detect settings option here, then click OK to save your settings. If you experienced a significant delay every time a web page loaded in Steam's web browser, it should now be gone. In the unlikely event that you encounter some sort of problem with your network connection, you could always re-enable this option.

    Increase Steam's Game Download Speed

    Steam attempts to automatically select the nearest download server to your location. However, it may not always select the ideal download server. Or, in the case of high-traffic events like big seasonal sales and huge game launches, you may benefit from selecting a less-congested server. To do this, open Steam's settings by clicking the Steam menu in Steam and selecting Settings. Click over to the Downloads tab and select the closest download server from the Download Region box. You should also ensure that Steam's download bandwidth isn't limited from here. You may want to restart Steam and see if your download speeds improve after changing this setting.

    In some cases, the closest server might not be the fastest. One a bit farther away could be faster if your local server is more congested, for example. Steam once provided information about content server load, which allowed you to select a regional server that wasn't under high load, but this information no longer seems to be available. Steam still provides a page that shows you the amount of download activity happening in different regions, including statistics about the difference in download speeds in different US states, but this information isn't as useful.

    Accelerate Steam and Your Games

    One way to speed up all your games — and Steam itself — is by getting a solid-state drive and installing Steam to it. Steam allows you to easily move your Steam folder — at C:\Program Files (x86)\Steam by default — to another hard drive. Just move it like you would any other folder. You can then launch the Steam.exe program as if you had never moved Steam's files.

    Steam also allows you to configure multiple game library folders. This means that you can set up a Steam library folder on a solid-state drive and one on your larger magnetic hard drive. Install your most frequently played games to the solid-state drive for maximum speed and your less frequently played ones to the slower magnetic hard drive to save SSD space. To set up additional library folders, open Steam's Settings window and click the Downloads tab. You'll find the Steam Library Folders option here. Click the Add Library Folder button and create a new game library on another hard drive. When you install a game in Steam, you'll be asked which library folder you want to install it to.

    With the proxy compatibility option disabled, the correct download server chosen, and Steam installed to a fast SSD, it should be a speed demon. There's not much more you can do to speed up Steam, short of upgrading other hardware like your computer's CPU.

    Image Credit: Andrew Nash on Flickr

    Read the article

  • Multi-module web project with Spring and Maven

    - by Johan Sjöberg
    Assume we have a few projects, each containing some web resources (e.g., HTML pages):

        parent.pom
        +- web (war)
        +- web-plugin-1 (jar)
        +- web-plugin-2 (jar)
        ...

    Let's say web is the deployable war project, which depends on a known but selectable set of plugins. What is a good way to set this up using Spring and Maven?

    1. Let the plugins be war projects and use Maven's poor support for importing other war projects.
    2. Put all web resources for all plugins in the web project.
    3. Add all web resources to the classpath of the jar web-plugin-* dependencies and let Spring read the files from the respective classpaths.
    4. Other?

    I've previously used #1, but the copy-paste semantics of war dependencies in Maven are horrible.

    Read the article

  • What GUIs are there for Axel or other downloaders that use multiple connections?

    - by cipricus
    In order to enjoy my maximum download speed, I use and like Axel very much, but from time to time I download multiple files, and having so many windows open has some disadvantages. I use Axel with FlashGot in Firefox (SeaMonkey etc.), but I would like to add a GUI for it, and possibly have multiple downloads in a nice list as in any civilized downloader. I am not aware of a GUI for Axel that works. Axel-kapt crashes. (A question on how to use it properly in Ubuntu got only one somewhat dismissive answer, by yours truly...) Gaxel just opens a window with empty fields that I have to fill manually (which defeats the purpose). I would like to know how to install something like Gwget, which is described here, in an old answer, as an alternative (but Gwget itself might be too old too). Help!
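    While looking for a front end, a plain shell loop can already tame the many-windows problem. This is just a hedged sketch: the URL is a placeholder and urls.txt stands in for a text file of download links, one per line.

        # Axel splits a single download across several connections; -n sets how many
        axel -n 8 -o file.iso "http://example.com/file.iso"

        # A poor man's queue: run the downloads one after another in a single terminal
        while read -r url; do
            axel -n 8 "$url"
        done < urls.txt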

    Read the article

  • Embed audio broadcasting on web page

    - by giargo
    Hi, I'd like to embed a simple audio player on my web page and I want it to get the audio from a stream broadcast from my server. I've read that I can use Icecast on my web server, getting an audio stream from a client using IceS (or that's what I gathered from other questions and articles), but once I have my stream, Icecast is supposed to broadcast it on a URL that can be opened by players like Winamp or similar. I've found that this is quite a rare topic; usually people just want to broadcast "radio" where the files are taken from a static playlist. In this case I have to get a stream from an Icecast URL and embed it with a player on a web page. Thanks.

    Read the article
