Search Results

Search found 8358 results on 335 pages for 'external harddrive'.

Page 28/335 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Cheap NAS for connecting a USB drive to the network

    - by olaeld
    I am looking for a cheap way of hooking up a USB drive to the LAN. This could be a router with a USB port, or possibly a more regular NAS device with a USB port. I am hoping to find something below $200. But if there is a device with "magical properties" I may be interested to hear about it anyway.

    Read the article

  • USB 3.0 randomly disconnecting and reconnecting

    - by user1624552
    I had an old laptop that died, so I bought a new one. I took the 2.5" SATA hard drive from the old laptop, put it into an external USB 3.0 2.5" SATA enclosure, and connected it to the new laptop, which has Windows 8 64-bit installed. When I connect the external hard drive to the new laptop through a USB 3.0 port, it randomly disconnects and reconnects continuously, even when I am not using it. This also happens if I connect it to another USB 3.0 port. I have also observed that if I connect the external hard drive to a USB 2.0 port instead of a USB 3.0 port, everything works fine and no random disconnections and reconnections occur. It only happens on the USB 3.0 ports. Any ideas on how to solve this?

    Read the article

  • Use Ubuntu from a removable drive on both a laptop and a desktop computer

    - by moragos
    My laptop, which I run Ubuntu on, is getting a bit old and I find it's getting slower and slower at running applications. My desktop computer is more powerful, but I can't give up the portability of my laptop. I was thinking of installing an HD drawer in both my laptop and desktop, and when I come home, just pulling the HD from the laptop and plugging it into the desktop. I wanted to ask if anyone has tried this or has any input on the idea.

    Read the article

  • ERROR: Linux route add command failed: external program exited with error status: 4

    - by JohnMerlino
    A remote machine running Fedora uses OpenVPN, and multiple developers were able to connect to it successfully with their OpenVPN clients. However, I am running Ubuntu 12.04 and I am having trouble connecting to the server via VPN. I copied ca.crt, home.key, and home.crt from the server to the /etc/openvpn folder on my local machine. My client.conf file looks like this:

      ##############################################
      # Sample client-side OpenVPN 2.0 config file #
      # for connecting to multi-client server.     #
      #                                            #
      # This configuration can be used by multiple #
      # clients, however each client should have   #
      # its own cert and key files.                #
      #                                            #
      # On Windows, you might want to rename this  #
      # file so it has a .ovpn extension           #
      ##############################################

      # Specify that we are a client and that we
      # will be pulling certain config file directives
      # from the server.
      client

      # Use the same setting as you are using on
      # the server.
      # On most systems, the VPN will not function
      # unless you partially or fully disable
      # the firewall for the TUN/TAP interface.
      ;dev tap
      dev tun

      # Windows needs the TAP-Win32 adapter name
      # from the Network Connections panel
      # if you have more than one. On XP SP2,
      # you may need to disable the firewall
      # for the TAP adapter.
      ;dev-node MyTap

      # Are we connecting to a TCP or
      # UDP server? Use the same setting as
      # on the server.
      ;proto tcp
      proto udp

      # The hostname/IP and port of the server.
      # You can have multiple remote entries
      # to load balance between the servers.
      remote xx.xxx.xx.130 1194
      ;remote my-server-2 1194

      # Choose a random host from the remote
      # list for load-balancing. Otherwise
      # try hosts in the order specified.
      ;remote-random

      # Keep trying indefinitely to resolve the
      # host name of the OpenVPN server. Very useful
      # on machines which are not permanently connected
      # to the internet such as laptops.
      resolv-retry infinite

      # Most clients don't need to bind to
      # a specific local port number.
      nobind

      # Downgrade privileges after initialization (non-Windows only)
      ;user nobody
      ;group nogroup

      # Try to preserve some state across restarts.
      persist-key
      persist-tun

      # If you are connecting through an
      # HTTP proxy to reach the actual OpenVPN
      # server, put the proxy server/IP and
      # port number here. See the man page
      # if your proxy server requires
      # authentication.
      ;http-proxy-retry # retry on connection failures
      ;http-proxy [proxy server] [proxy port #]

      # Wireless networks often produce a lot
      # of duplicate packets. Set this flag
      # to silence duplicate packet warnings.
      ;mute-replay-warnings

      # SSL/TLS parms.
      # See the server config file for more
      # description. It's best to use
      # a separate .crt/.key file pair
      # for each client. A single ca
      # file can be used for all clients.
      ca ca.crt
      cert home.crt
      key home.key

      # Verify server certificate by checking
      # that the certificate has the nsCertType
      # field set to "server". This is an
      # important precaution to protect against
      # a potential attack discussed here:
      # http://openvpn.net/howto.html#mitm
      #
      # To use this feature, you will need to generate
      # your server certificates with the nsCertType
      # field set to "server". The build-key-server
      # script in the easy-rsa folder will do this.
      ns-cert-type server

      # If a tls-auth key is used on the server
      # then every client must also have the key.
      ;tls-auth ta.key 1

      # Select a cryptographic cipher.
      # If the cipher option is used on the server
      # then you must also specify it here.
      ;cipher x

      # Enable compression on the VPN link.
      # Don't enable this unless it is also
      # enabled in the server config file.
      comp-lzo

      # Set log file verbosity.
      verb 3

      # Silence repeating messages
      ;mute 20

    But when I start the OpenVPN client and look in /var/log/syslog, I notice the following errors:

      May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37
      May 27 22:13:51 myuser ovpn-client[5626]: ERROR: Linux route add command failed: external program exited with error status: 4
      May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 172.27.12.0 netmask 255.255.255.0 gw 10.27.12.37
      May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.255 gw 10.27.12.37

    And I am unable to connect to the server over the VPN:

      $ ssh [email protected]
      ssh: connect to host xxx.xx.xx.130 port 22: No route to host

    What might I be doing wrong?
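    One detail worth checking in the log above (an observation, not something the question confirms as the cause): the command that fails tries to add 10.27.12.1 with netmask 255.255.255.252, and that address has host bits set for a /30 network, which the route utility rejects. A quick way to sanity-check the pushed network/netmask pairs with Python's ipaddress module:

      import ipaddress

      # (network, netmask) pairs copied from the syslog excerpt above.
      pushed_routes = [
          ("10.27.12.1", "255.255.255.252"),
          ("172.27.12.0", "255.255.255.0"),
          ("10.27.12.1", "255.255.255.255"),
      ]

      for net, mask in pushed_routes:
          try:
              # strict=True refuses networks whose host bits are set, which is
              # roughly the consistency check `route add -net` performs.
              ipaddress.ip_network("%s/%s" % (net, mask), strict=True)
              print("OK      %s/%s" % (net, mask))
          except ValueError as err:
              print("INVALID %s/%s -> %s" % (net, mask, err))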

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross-posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains: upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results. We have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes; this will be detailed separately.

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) of each file transferred. The code is available here. We then created a directory containing a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you are likely aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing and executed the tests 50x for each file size. We expect that these steps yield a sufficiently balanced set of results.
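    The author's baseline tool is the linked C# console application; purely for orientation, a rough Python sketch of that kind of measurement loop might look like the following, where upload_file is a placeholder for whatever storage client call is in use (not a real Azure API):

      import csv
      import os
      import random
      import time

      def measure_directory(directory, upload_file, report_path="baseline.csv"):
          """Upload every file in `directory` in random order, recording size and duration."""
          names = os.listdir(directory)
          random.shuffle(names)  # avoid sending all files of one size back-to-back
          with open(report_path, "w", newline="") as out:
              writer = csv.writer(out)
              writer.writerow(["file", "bytes", "ms"])
              for name in names:
                  path = os.path.join(directory, name)
                  start = time.monotonic()
                  upload_file(path)  # placeholder for the storage client upload call
                  elapsed_ms = (time.monotonic() - start) * 1000
                  writer.writerow([name, os.path.getsize(path), round(elapsed_ms, 3)])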
    Once the baseline was collected and analyzed, we updated the test harness application with methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following; less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows:

    We then tested the effects of blocking and parallelizing the transfers by running the updated application against the same source set and doing a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    (click chart for full size image)

    The chart above illustrates some interesting points about the results:
      - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
      - For some of the moderately-sized source files, small blocks (256KB) are best.
      - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
      - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (click chart for full size image)

    The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but it also highlights the benefits of some of the other block sizes at different source file sizes.
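    As a rough illustration of the blocked/parallelized pattern described above: the author's actual implementation is the linked C# code (Parallel.For plus PutBlock()/PutBlockList() from the 1.1 storage client); the Python sketch below expresses the same idea with placeholder put_block/put_block_list callables rather than a real Azure SDK:

      import hashlib
      import time
      from concurrent.futures import ThreadPoolExecutor

      BLOCK_SIZE = 1 * 1024 * 1024  # 1MB blocks gave the best average improvement in the tests above

      def split_blocks(path, block_size=BLOCK_SIZE):
          """Read the source file and yield (block_id, data) pairs."""
          with open(path, "rb") as f:
              index = 0
              while True:
                  data = f.read(block_size)
                  if not data:
                      break
                  yield "block-%06d" % index, data
                  index += 1

      def upload_blocked(path, put_block, put_block_list, workers=8):
          """Split `path` into blocks, upload them in parallel, then commit the block list.

          put_block(block_id, data, md5) and put_block_list(block_ids) stand in for
          whatever storage client is being used.
          """
          start = time.monotonic()
          blocks = list(split_blocks(path))

          def send(block):
              block_id, data = block
              md5 = hashlib.md5(data).digest()  # sent along so the server can verify the block
              put_block(block_id, data, md5)
              return block_id

          with ThreadPoolExecutor(max_workers=workers) as pool:
              block_ids = list(pool.map(send, blocks))

          put_block_list(block_ids)          # assemble/commit the file from its blocks
          return time.monotonic() - start    # wall-clock time: split + transfer + commit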
    This last chart shows the change in total duration of the file uploads based on different block sizes for the various source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
      - Source code for the upload test application
      - Source code for the random file generator
      - OData feed of raw data from the non-optimized transfer tests: Experiment Metadata, Experiment Datasets (2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB uploads), Raw Data
      - OData feeds of raw data from the blocked/parallelized transfer tests: Experiment Metadata, Experiment Datasets, Raw Data (256KB, 512KB, 1MB, 2MB, and 4MB blocks)
      - Excel worksheet showing summarizations and comparisons

    Read the article

  • VoIP setup for one external PSTN line

    - by Jcl
    I'm completely new to VoIP and the like, and I'm trying to find information about what the best setup for this could be. I need 4 wireless extensions (maybe more in the future, but at most 5 or 6) connected to 1 PSTN line, and maybe 2 lines in the future. I've been trying to gather information about the gear needed, but everything I find seems over-the-top (and extremely expensive). The main problem is that the physical location we are in has no possibility of a decent internet connection, so using an external VoIP "virtual PBX" is not an option. Thing is, even if the organization is small, the phone is critical to it. I currently have an analog DECT/GAP PBX which does what I need, however the PBX is very bad and the call quality is horrible, and that's why I want to replace it. The requirements would be:
      - 4 wireless terminals (routing cable is not an option), all of them ringing on incoming PSTN calls.
      - Ability to make internal calls (4 separate offices) and to pass calls between terminals.
      - The 4 terminals should be able to access the external PSTN line without dialing any special codes.
      - Very important: terminals should be able to issue commands on the PSTN line to the external operator in the form *nn*nnnnnnnn#. I don't know whether this could turn out to be a problem, but I've had problems with analog PBXs which would take any * as a PBX command and wouldn't let terminals send it out to the external lines.
      - Not so important, but nice to have: call-waiting music.
    Could anyone recommend such a setup? I need to do this on an EXTREMELY LIMITED budget (that is: I don't have a hard limit, but everything should get as close to zero as possible). I have enough spare powerful computers and a 300Mbps wireless network which works just fine, so those don't need to be included in the budget. I don't really know if this is the best place to ask, but it's the most relevant StackExchange site I've found for this subject.

    Read the article

  • Windows 7: Setting up backup to an external hard drive on another computer on the network

    - by seansand
    I have an external hard drive connected to a Windows 7 (Home Edition) computer. I have another computer (with Windows 7 Ultimate), and I want the Windows 7 Ultimate machine to back up to the same external hard drive, without having to disconnect and move the drive from the Home Edition PC. When I get to the "Set up backup" dialog within Windows 7, it asks me where to save the backup. I select "Save on a network". However, when I enter "\\computername\harddrivename" under Browse, the "OK" button remains grayed out. The button remains grayed out unless I also enter a username and password under "Network credentials", but the account I have on the other computer doesn't have a password. To un-gray the button I must enter a fake password, which lets me click "OK", but then obviously I get a "bad password" error. Does anyone know how to get around this problem? (It seems kind of ridiculous.) I made sure that the security settings for the external hard drive on the other computer give full access to Everyone, so permissions are not the problem. I also thought about using HomeGroup instead of the regular security settings, but there is no obvious way to go about it that way, either.

    Read the article

  • Ubuntu 12.04: partitioning an external drive without losing data

    - by Menelaos Perdikeas
    I have an Ubuntu 12.04 machine with an external 1.5TB disk (just for data). It is /dev/sdc1, seen below:

      $ df -T
      Filesystem     Type        1K-blocks       Used  Available Use% Mounted on
      /dev/sda1      ext4       1451144932   27722584 1350794536   3% /
      udev           devtmpfs      6199460          4    6199456   1% /dev
      tmpfs          tmpfs         2482692        988    2481704   1% /run
      none           tmpfs            5120          0       5120   0% /run/lock
      none           tmpfs         6206724        284    6206440   1% /run/shm
      /dev/sdc1      fuseblk    1465135100  172507664 1292627436  12% /media/Elements

    The thing is, I would like to implement this rsync-based backup strategy and I want to use my /dev/sdc1 external drive for it. Since the guide mentioned above recommends placing the backup directory in a separate partition, I want to repartition the /dev/sdc1 external hard disk while retaining the existing data in a separate partition, i.e. split /dev/sdc1 into two partitions: (i) one used exclusively for the rsync-based backup and (ii) the other for the existing miscellaneous data. How should I go about partitioning with minimal risk to my existing data, and what kind of filesystem do you recommend? I would prefer a console-based guide, but unfortunately all the material I found on the web is oriented towards partitioning the main (bootable) disk rather than an external fuseblk filesystem used only for passive data.
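    The rsync-based strategy linked in the question isn't reproduced here; as a generic sketch of the hard-link snapshot idea such guides usually describe, something like the following could run against the new backup partition once it exists (the source and mount-point paths are made-up examples, and this assumes rsync is installed):

      import datetime
      import os
      import subprocess

      SOURCE = "/home/"                        # example tree to back up
      BACKUP_ROOT = "/media/backup/snapshots"  # example mount point of the backup partition

      def make_snapshot(source=SOURCE, backup_root=BACKUP_ROOT):
          """Create a timestamped snapshot; unchanged files are hard-linked to the previous one."""
          stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S")
          target = os.path.join(backup_root, stamp)
          latest = os.path.join(backup_root, "latest")

          cmd = ["rsync", "-a", "--delete"]
          if os.path.exists(latest):
              # Hard-link files that are unchanged since the last snapshot.
              cmd.append("--link-dest=" + os.path.realpath(latest))
          cmd += [source, target]
          subprocess.run(cmd, check=True)

          # Repoint the "latest" symlink at the snapshot just created.
          if os.path.islink(latest):
              os.remove(latest)
          os.symlink(target, latest)
          return target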

    Read the article

  • Postfix not delivering from external senders and not logging anything

    - by simendsjo
    Some semi-recent upgrades must have broken my Postfix + Dovecot configuration, but I'm having trouble finding the cause. My domain is simendsjo.me, with the MX record mail.simendsjo.me. I can send mail to both local and external recipients, and mail sent from internal mailboxes is delivered. The problem is that mail from external senders isn't delivered, and nothing is logged at all. The external senders don't receive any errors either. I have no idea where to even start looking, as nothing at all is logged when external mail is sent to my server. So the first issue would be: how can I turn on some debug messages for Postfix? I've tried:

      debug_peer_level = 2
      debug_peer_list = simendsjo.me

    and also _level = 999 and _list = gmail.com (where I'm trying to send emails from), but nothing is logged. When sending mail from a local mailbox (but from an outside computer, not localhost), a lot is logged. I don't have any rules in iptables either. Any ideas how I can get some debug messages for Postfix?
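    When nothing at all shows up in the logs, one plausible (but unconfirmed here) explanation is that external SMTP connections never reach Postfix in the first place (firewall, NAT, or the ISP blocking port 25), in which case no amount of debug_peer settings will produce output. A quick outside-in check, run from a machine on a different network, using only the Python standard library and the hostname given in the question:

      import smtplib
      import socket

      MAIL_HOST = "mail.simendsjo.me"  # the MX named in the question

      try:
          # If this times out or is refused, external mail never reaches Postfix,
          # which would explain the empty logs.
          server = smtplib.SMTP(MAIL_HOST, 25, timeout=15)
          code, banner = server.ehlo()
          print("Connected; EHLO response:", code, banner.decode(errors="replace"))
          server.quit()
      except (socket.timeout, OSError) as err:
          print("Could not reach %s:25 -> %r" % (MAIL_HOST, err))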

    Read the article

  • How and when do browsers implement real-time changes to a document's DOM?

    - by Mark
    My website dynamically embeds an external JavaScript file into the head tag. The external JavaScript defines a global variable myString = "data". At what point does myString become accessible to JavaScript within the website?

      <html>
        <head>
          <script type="text/javascript">
            myString = null;
            external = document.createElement("script");
            // externalScript.js is one line, containing the following:
            // myString = "data";
            external.setAttribute("src", "externalScript.js");
            external.setAttribute("type", "text/javascript");
            document.getElementsByTagName("head")[0].append(external);
            alert(myString);
          </script>
        </head>
        <body>
        </body>
      </html>

    This code alerts null (when I thought it would alert "data") in Chrome and IE, even though the DOM has loaded externalScript.js by this point. When is externalScript.js actually evaluated by the browser, and at what point do I have access to the new value of myString?

    Read the article

  • Configuring UCM cache to check for external Content Server changes

    - by Martin Deh
    Recently, I was involved in a customer scenario where they were modifying the Content Server's contributor data files directly through the Content Server. This operation, of course, is completely supported. However, since the contributor data file was modified through the "backdoor", a running WebCenter Spaces page, which also used the same data file, would not get the updates immediately. This was due to two reasons. The first reason is that the Spaces page was using Content Presenter to display the contents of the data file. The second reason is that the Spaces application was using the "cached" version of the data file. Fortunately, there is a way to configure the cache so backdoor changes can be picked up more quickly and automatically.

    First, a brief overview of Content Presenter. The Content Presenter task flow enables WebCenter Spaces users with Page-Edit permissions to precisely customize the selection and presentation of content in a WebCenter Spaces application. With Content Presenter, you can select a single item of content, contents under a folder, a list of items, or a query for content, and then select a Content Presenter based template to render the content on a page in a Spaces application. In addition to displaying the folders and files in a Content Server, Content Presenter integrates with Oracle Site Studio to allow you to create, access, edit, and display Site Studio contributor data files (Content Server documents) in either a Site Studio region template or in a custom Content Presenter display template. More information about creating Content Presenter display templates can be found in the OFM Developers Guide for WebCenter Portal.

    The easiest way to configure the cache is to modify the WebCenter Spaces Content Server service connection setting through Enterprise Manager. From here, under Cache Details, there is a section to set the Cache Invalidation Interval. Basically, this enables the cache to be monitored by the cache "sweeper" utility. The cache sweeper queries for changes in the Content Server and then "marks" the object in the cache as "dirty". This causes the application, in turn, to get a new copy of the document from the Content Server, which replaces the cached version. By default, the initial value of the Cache Invalidation Interval is 0 (minutes), which basically means that the sweeper is OFF. To turn the sweeper ON, just set a value (in minutes); the minimum value that can be set is 2 (minutes).

    Just a note: in some instances, once the value of the Cache Invalidation Interval has been set (and saved) in the Enterprise Manager UI, it becomes "sticky" and the interval value cannot be set back to 0. The good news is that this value can also be updated through a WLST command. The WLST command to run is as follows:

      setJCRContentServerConnection(appName, name, [socketType, url, serverHost, serverPort, keystoreLocation, keystorePassword, privateKeyAlias, privateKeyPassword, webContextRoot, clientSecurityPolicy, cacheInvalidationInterval, binaryCacheMaxEntrySize, adminUsername, adminPassword, extAppId, timeout, isPrimary, server, applicationVersion])

    One way to get the required information for executing the command is to use the listJCRContentServerConnections('webcenter', verbose=true) command.
    For example, this is the sample output from the execution:

      ------------------ UCM ------------------
      Connection Name:                  UCM
      Connection Type:                  JCR
      External Application ID:
      Timeout:                          (not set)
      CIS Socket Type:                  socket
      CIS Server Hostname:              webcenter.oracle.local
      CIS Server Port:                  4444
      CIS Keystore Location:
      CIS Private Key Alias:
      CIS Web URL:
      Web Server Context Root:          /cs
      Client Security Policy:
      Admin User Name:                  sysadmin
      Cache Invalidation Interval:      2
      Binary Cache Maximum Entry Size:  1024
      The Documents primary connection is "UCM"

    From this information, the completed setJCRContentServerConnection would be:

      setJCRContentServerConnection(appName='webcenter', name='UCM', socketType='socket', serverHost='webcenter.oracle.local', serverPort='4444', webContextRoot='/cs', cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024', adminUsername='sysadmin', isPrimary=1)

    Note: The Spaces managed server must be restarted for the change to take effect. More information about using WLST for WebCenter can be found here.

    Once the sweeper is turned ON, only cache objects that have been changed will be invalidated. To test this out, I will go through a simple scenario. The first thing to do is configure the Content Server so it can monitor and report on events. Log into the Content Server console application and, under the Administration menu item, select System Audit Information. Note: If your console is using the left menu display option, the Administration link will be located there.

    Under Tracing Sections Information, add only "system" and "requestaudit" to the Active Sections. Check Full Verbose Tracing, check Save, then click the Update button. Once this is done, select the View Server Output menu option. This changes the browser view to display the log. This is all that is needed to configure the Content Server.

    For example, the following is the View Server Output with the cache invalidation interval set to 2 (minutes). Note the time stamps:

      requestaudit/6 08.30 09:52:26.001  IdcServer-68    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.016933999955654144(secs)
      requestaudit/6 08.30 09:52:26.010  IdcServer-69    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.006134999915957451(secs)
      requestaudit/6 08.30 09:52:26.014  IdcServer-70    GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.004271999932825565(secs)
      ... other trace info ...
      requestaudit/6 08.30 09:54:26.002  IdcServer-71    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.020323999226093292(secs)
      requestaudit/6 08.30 09:54:26.011  IdcServer-72    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.017928000539541245(secs)
      requestaudit/6 08.30 09:54:26.017  IdcServer-73    GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.010185999795794487(secs)

    Now that the tracing logs are reporting correctly, the next step is to set up the Spaces app to test the sweeper. I will use two different pages, each containing a Content Presenter task flow. Each task flow will use a different custom Content Presenter display template and will be assigned one of two different contributor data files (the documents that will be in the cache). The pages at run time appear as follows:

    Initially, when the Spaces pages containing the content are loaded in the browser for the first time, you can see the tracing information in the Content Server output viewer.
      requestaudit/6 08.30 11:51:12.030 IdcServer-129 CLEAR_SERVER_OUTPUT [dUser=weblogic] 0.029171999543905258(secs)
      requestaudit/6 08.30 11:51:12.101 IdcServer-130 GET_SERVER_OUTPUT [dUser=weblogic] 0.025721000507473946(secs)
      requestaudit/6 08.30 11:51:26.592 IdcServer-131 VCR_GET_DOCUMENT_BY_NAME [dID=919][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.21525299549102783(secs)
      requestaudit/6 08.30 11:51:27.117 IdcServer-132 VCR_GET_CONTENT_TYPES [dUser=sysadmin][IsJava=1] 0.5059549808502197(secs)
      requestaudit/6 08.30 11:51:27.146 IdcServer-133 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.03360399976372719(secs)
      requestaudit/6 08.30 11:51:27.169 IdcServer-134 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.008806000463664532(secs)
      requestaudit/6 08.30 11:51:27.204 IdcServer-135 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.013265999965369701(secs)
      requestaudit/6 08.30 11:51:27.384 IdcServer-136 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.18119299411773682(secs)
      requestaudit/6 08.30 11:51:27.533 IdcServer-137 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.1519480049610138(secs)
      requestaudit/6 08.30 11:51:27.634 IdcServer-138 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.10827399790287018(secs)
      requestaudit/6 08.30 11:51:27.687 IdcServer-139 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.059702999889850616(secs)
      requestaudit/6 08.30 11:51:28.271 IdcServer-140 GET_USER_PERMISSIONS [dUser=weblogic][IsJava=1] 0.006703000050038099(secs)
      requestaudit/6 08.30 11:51:28.285 IdcServer-141 GET_ENVIRONMENT [dUser=sysadmin][IsJava=1] 0.010893999598920345(secs)
      requestaudit/6 08.30 11:51:30.433 IdcServer-142 GET_SERVER_OUTPUT [dUser=weblogic] 0.017318999394774437(secs)
      requestaudit/6 08.30 11:51:41.837 IdcServer-143 VCR_GET_DOCUMENT_BY_NAME [dID=508][dDocName=113_ES][dDocTitle=Landing Home][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.15937699377536774(secs)
      requestaudit/6 08.30 11:51:42.781 IdcServer-144 GET_FILE [dID=326][dDocName=WEBCENTERORACL000315][dDocTitle=Duke][dUser=anonymous][RevisionSelectionMethod=LatestReleased][dSecurityGroup=Public][xCollectionID=0] 0.16288499534130096(secs)

    The highlighted sections show where the two data files, DF_UCMCACHETESTER (P1 page) and 113_ES (P2 page), were requested by the (Spaces) VCR connection to the Content Server. The most important line to notice is the VCR_GET_DOCUMENT_BY_NAME invocation. On subsequent refreshes of these two pages, you will notice (after you refresh the Content Server's View Server Output) that there are no further traces of the same VCR_GET_DOCUMENT_BY_NAME invocations. This is because the pages are getting the documents from the cache.

    The next step is to go through the "backdoor" and change one of the documents through the Content Server console. This can be done by first locating the data file document and, from the Content Information page, selecting the Edit Data File menu option. This invokes the Site Studio Contributor, where the modifications can be made. Refreshing the Content Server View Server Output, the tracing displays the operations performed on the document.
      requestaudit/6 08.30 11:56:59.972 IdcServer-255 SS_CHECKOUT_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dUser=weblogic][dSecurityGroup=Public] 0.05558200180530548(secs)
      requestaudit/6 08.30 11:57:00.065 IdcServer-256 SS_GET_CONTRIBUTOR_CONFIG [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.08632399886846542(secs)
      requestaudit/6 08.30 11:57:00.470 IdcServer-259 DOC_INFO_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.02268899977207184(secs)
      requestaudit/6 08.30 11:57:10.177 IdcServer-264 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007652000058442354(secs)
      requestaudit/6 08.30 11:57:10.181 IdcServer-263 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.01868399977684021(secs)
      requestaudit/6 08.30 11:57:10.187 IdcServer-265 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.009367000311613083(secs)
      (internal)/6 08.30 11:57:26.118 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
      (internal)/6 08.30 11:57:26.121 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
      requestaudit/6 08.30 11:57:26.122 IdcServer-266 SS_SET_ELEMENT_DATA [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0][StatusCode=0][StatusMessage=Successfully checked in content item 'DF_UCMCACHETESTER'.] 0.3765290081501007(secs)
      requestaudit/6 08.30 11:57:30.710 IdcServer-267 DOC_INFO_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.07942699640989304(secs)
      requestaudit/6 08.30 11:57:30.733 IdcServer-268 SS_GET_CONTRIBUTOR_STRINGS [dUser=weblogic] 0.0044570001773536205(secs)

    After a few moments and a refresh of the P1 page, the update has been applied. Note: the refresh time may vary, since the Cache Invalidation Interval (set to 2 minutes) is not tied to when the change happened; the sweeper simply runs every 2 minutes. Refreshing the Content Server View Server Output, the tracing shows the important information:

      requestaudit/6 08.30 11:59:10.171 IdcServer-270 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.00952600035816431(secs)
      requestaudit/6 08.30 11:59:10.179 IdcServer-271 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.011118999682366848(secs)
      requestaudit/6 08.30 11:59:10.182 IdcServer-272 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007447000127285719(secs)
      requestaudit/6 08.30 11:59:16.885 IdcServer-273 VCR_GET_DOCUMENT_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.0786449983716011(secs)

    After the specified interval, the sweeper is invoked, which is noted by the GET_... calls. Since the history shows the change, the next call is VCR_GET_DOCUMENT_BY_NAME, which retrieves the new version of the (modified) data file. Navigating back to the P2 page and viewing the server output, there are no further VCR_GET_DOCUMENT_BY_NAME calls to retrieve that data file; it is simply served from the cache.
    Upon further review of the server output, we can see that there was only one request for VCR_GET_DOCUMENT_BY_NAME:

      requestaudit/6 08.30 12:08:00.021 Audit Request Monitor Request Audit Report over the last 120 Seconds for server webcenteroraclelocal16200****
      requestaudit/6 08.30 12:08:00.021 Audit Request Monitor -Num Requests 8 Errors 0 Reqs/sec. 0.06666944175958633 Avg. Latency (secs) 0.02762500010430813 Max Thread Count 2
      requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 1 Service VCR_GET_DOCUMENT_BY_NAME Total Elapsed Time (secs) 0.09200000017881393 Num requests 1 Num errors 0 Avg. Latency (secs) 0.09200000017881393
      requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 2 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.054999999701976776 Num requests 1 Num errors 0 Avg. Latency (secs) 0.054999999701976776
      requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 3 Service GET_FOLDER_HISTORY_REPORT Total Elapsed Time (secs) 0.028999999165534973 Num requests 2 Num errors 0 Avg. Latency (secs) 0.014499999582767487
      requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 4 Service GET_SERVER_OUTPUT Total Elapsed Time (secs) 0.017999999225139618 Num requests 1 Num errors 0 Avg. Latency (secs) 0.017999999225139618
      requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 5 Service GET_FILE Total Elapsed Time (secs) 0.013000000268220901 Num requests 1 Num errors 0 Avg. Latency (secs) 0.013000000268220901
      requestaudit/6 08.30 12:08:00.021 Audit Request Monitor ****End Audit Report*****
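    Putting the WLST pieces above together, a minimal sketch of a script that could be run with wlst.sh to turn the sweeper on (the connect() URL and credentials are placeholders, and the property values simply mirror the sample connection shown earlier; treat this as a sketch rather than a tested script):

      # sweeper_on.py -- run with: wlst.sh sweeper_on.py
      connect('weblogic', 'welcome1', 't3://adminhost.example.com:7001')  # placeholder admin credentials/URL

      # Show the current Documents connections; verbose output includes the
      # Cache Invalidation Interval, as in the sample output above.
      listJCRContentServerConnections('webcenter', verbose=true)

      # Re-issue the connection definition with the sweeper turned on (2 minutes).
      setJCRContentServerConnection(appName='webcenter', name='UCM',
                                    socketType='socket',
                                    serverHost='webcenter.oracle.local',
                                    serverPort='4444',
                                    webContextRoot='/cs',
                                    cacheInvalidationInterval='2',
                                    binaryCacheMaxEntrySize='1024',
                                    adminUsername='sysadmin',
                                    isPrimary=1)

      disconnect()
      # The Spaces managed server must still be restarted for the change to take effect.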

    Read the article

  • Can and should UDF be used as a hard drive format?

    - by dlamblin
    Several times recently I've seen UDF suggested as the solution for a cross-platform format for a drive used on Linux, Mac OS X, and Windows XP and above. I've searched here and not found the same suggestion (most suggest ntfs-3g, which seems to cost money and isn't preinstalled on a Mac). So my question is: how is this done right, and has anyone done this? Have you then filled up the drive and deleted some files to make space, finding that everything works like a real read/write format, even though UDF seems to have been designed primarily as a write-once format?

    Read the article

  • power management of USB-enclosed hard drives

    - by intuited
    With a typical USB hard drive enclosure, is the full range of drive power management functionality available? In what may be an unrelated matter: is it possible to suspend a PC without unmounting an attached USB-powered drive, and then remounting it on resume? This is the behaviour I'm currently seeing (running Ubuntu Linux 10.10). Are there certain models or brands that provide more complete control over this aspect of drive operation? My Friendly Neighbourhood Computer Store carries (part of) the Vantec NexStar product line.

    Read the article

  • Keeping a Dell desk monitor 'connected' to a Dell Studio 15 laptop?

    - by Jerry
    First of all, I am new to Ubuntu 10.04, but it is love at first sight, and the only windows I will see again are in my house and car! Each time I disconnect my Dell Studio 15 from my Dell 36" monitor, I have to reconnect the monitor through the System/Monitors settings. Question: Is there a way to set things up so that once I slide my laptop under the stand, reconnect the monitor cable and the 3 USB cables, and press start, the monitor will go 'live' without my having to start all over?

    Read the article

  • How long does an ext4 format take?

    - by Bill O'Dwyer
    The USB cable on my Iomega Prestige 1TB hard drive conked out a while back, and I've finally managed to get a new one. I removed the old NTFS file system because I use Windows maybe once a month, and then only for Windows-only activities. So I plugged the HDD into my laptop and got it to start converting to ext4. GParted is currently on the "create new ext4 file system" step and has been for about 2 hours. Is this right? I know 1TB is fairly large, but the last time I did this, I'm pretty sure it was a fast(er) job. Can anybody shed some light on what's going on here?

    Read the article

  • Front USB ports won't mount hard drives, internal USB ports do

    - by Thesgsuser
    I have noticed something on my new build. I am using the newest version of Ubuntu Desktop, and my motherboard is the Asus F1A75-M Pro R2.0. With the USB ports on the back, all my NTFS hard disks and USB sticks work fine, but when I put them in the front USB ports of my chassis (SilverStone Milo ML-03) they won't mount. I have 2 USB 3.0 ports on the front of the case, connected with an internal USB 3.0 header. But I verified that the USB 3.0 ports on the back do mount the hard disk, so I don't think it has anything to do with USB 3.0. The strange thing is, my mouse works fine on the front USB ports. Every USB device seems to work except anything with memory inside it :( What seems to be the problem?

    Read the article

  • Ubuntu 11.10: Intel VGA controller detected but no signal

    - by Fred Zimmerman
    Ubuntu 11.10 on a Dell Vostro with Intel® Sandybridge Mobile graphics (x86/MMX/SSE2). The Displays settings correctly detect and identify my ViewSonic 27" VGA monitor, but the monitor says it is receiving no signal. Plugged into another monitor (a Sony 20"), same result.

      $ lspci | grep VGA
      00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)

    I've browsed these forums and tried everything I can, but nothing has worked. I need a stepwise troubleshooting plan.

    Read the article

  • 13.04 - External monitor plugged into laptop HDMI shows an image but lags a lot

    - by Thiago Fassina
    The laptop uses an Nvidia GeForce GTS 250M. When I plug in the monitor, it shows an image, but the frame rate is bad and it lags a lot. For example, when I move the mouse pointer, it leaves a trail that vanishes from time to time. I didn't install the Nvidia drivers, because every time I did that on past versions I ended up with an unbootable system. On past versions I just gave up trying to use the monitor, because it never showed anything regardless of what driver I installed. But since 13.04 ALMOST made it work, I'll give it another try. P.S.: It works perfectly on Windows 7/8.

    Read the article

  • Grub2 attempting to boot hd1 when it should boot hd0

    - by JoBu1324
    I'm attempting to perform a "normal" install on a USB3 SSD (I don't know if it is noteworthy, but I don't have a swap partition). The installation proceeds normally (I'm installing from a USB2 device I created using LiLi Boot, with a copy of Ubuntu 12.10 64bit that I downloaded directly from the source. The system I'm running Ubuntu on has had a more traditional installation of ubuntu running on it without issue (also 12.10), so I know that everything works A-OK when booting from a 7200RPM internal disk. There are a number of oddities that I've noticed so far, including graphics corruption, but the first and most pressing issue is that Grub2 refuses to recognize the correct hd. From /boot/grub/grub.cfg: if [ x$feature_default_font_path = xy ] ; then font=unicode else insmod part_msdos insmod ext2 set root='hd1,msdos1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1 b58ee4f7-d41d-400a-b7b8-18bd1f0ae9d3 else search --no-floppy --fs-uuid --set=root b58ee4f7-d41d-400a-b7b8-18bd1f0ae9d3 fi font="/usr/share/grub/unicode.pf2" fi This is from a 100% fresh install of linux (first boot), which was installed while no hard drives were connected to the system, other than the USB2 LiLi drive. The system refuses to boot unless I change the hd1,msdos1 - hd0,msdos1 in the grub menu at boot, when it is the only disk device connected to the PC. What options are left for me to troubleshoot this issue? I've been racking my brains and taxing the internet trying to dig up something on this problem, but now I'd like to see if the Ubuntu community can rise to the challenge and help me fix this boot problem. This is the second time I've attempted this particular setup. The first time, after days of wasted time, I managed to get it to boot every other boot - i.e. every even boot it would boot into Ubuntu like it was happy; every odd boot it would boot into the BusyBox or Grub prompt. At one point it complained that it couldn't find /dev/disk/by-uuid/[the disk], which I found most perplexing, since the disk was there and booted before and after the occurrence (with intervention).

    Read the article

  • Two problems, one with suspend and one with lock

    - by user199208
    I recently did two things that may be the cause of either of them: 1) I started using a second screen, and 2) I installed xscreensaver instead of the stock, power-saving option. My first question is about suspend: every time I wake from suspend, networking is disabled. While I've seen multiple posts about that on here, it didn't start happening until recently. Second, lock seems to be disabled now. Are both of these problems bugs, or were they caused by the second screen or maybe xscreensaver?

    Read the article

  • Turn Off Sleep on Seagate GoFlex?

    - by RPG Master
    I bought this 2TB Seagate GoFlex last Black Friday, and since then, every 15 minutes or so the drive spins down, and then a little while later it completely dismounts. Very annoying. From what I understand, you could turn this off using the included Windows- and Mac-only software. This function and what controls it isn't proprietary, right? There has to be something that'll let me set it in Ubuntu... Anyone have any suggestions? Also, I formatted it to ext4. Hope I didn't screw myself up. :/

    Read the article

  • Get foreign key in external content type to show up in list view

    - by Rob
    I have come across blogs about how to set up an external content type (e.g. http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2010/02/02/it-s-easy-to-configure-an-external-list-with-business-connectivity-services-bcs-in-sharepoint-foundation-2010.aspx), but I have not seen any examples of what to do when your external SQL DB has foreign keys. For example: I have a database with orders and customers. An order has one and only one customer, and a customer can have many orders. How can I set up external content types in such a way that, in the list view of these external content types, I can jump between them and look up values from the other type?

    Read the article
