Search Results

Search found 3160 results on 127 pages for 'min soo kim'.

  • Entity and pattern validation vs DB constraint

    - by Joerg
    When it comes to performance, what is the better way to validate user input? Take a phone number: you only want digits in the database, but it may begin with a 0, so you will store it as a varchar. Is it better to check it via the entity model, like this:

        @Size(min = 10, max = 12)
        @Digits(fraction = 0, integer = 12)
        @Column(name = "phone_number")
        private String phoneNumber;

    Or is it better to use a CHECK constraint on the database side (and no checking in the entity model) for the same rule?
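    A minimal sketch of the database-side alternative, assuming Hibernate as the JPA provider and MySQL's REGEXP syntax (both assumptions, not stated in the question): Hibernate's @Check annotation emits the constraint in the generated DDL, so the rule lives in the schema rather than the entity model.

        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import org.hibernate.annotations.Check;

        // Sketch only: @Check adds a CHECK constraint to the generated DDL.
        // The REGEXP predicate assumes MySQL; other databases spell it differently.
        @Entity
        @Check(constraints = "phone_number REGEXP '^[0-9]{10,12}$'")
        public class Contact {

            @Id
            private Long id;

            @Column(name = "phone_number", length = 12)
            private String phoneNumber;
        }

    Either way the rule is enforced once per write; the practical difference is that the annotations validate before the round trip to the database, while the CHECK constraint also protects against writes that bypass the application.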

    Read the article

  • CSS hack for Google Chrome and Safari

    - by Renso
    When you want to hack CSS in an external stylesheet just for Google Chrome and Safari, here is an example where I override the margin-top for Chrome and Safari.

    Normal:

        #AccountMaintenanceWrapper #callDetailsPreviewWrapper {
            border: none;
            padding: 0px;
            width: 209px;
            position: fixed;
            margin-top: 84px;
            z-index: 1;
        }

    Google Chrome and Safari:

        @media screen and (-webkit-min-device-pixel-ratio:0) {
            #AccountMaintenanceWrapper #callDetailsPreviewWrapper {
                margin-top: 12px;
            }
        }

    Read the article

  • Estimates, constraint and design [closed]

    - by user65964
    For your next two software projects (assuming you're getting programming assignments; otherwise, consider a program to find the min and max of a set of rational numbers), estimate how much effort they will take before doing them, then keep track of the actual time spent. How accurate were your estimates? State the requirements, constraints, design, estimate (your original estimate and the actual time it took), and implementation (conventions used, implementation/test path followed).

    Read the article

  • Interesting fact #123423

    - by Tim Dexter
    A question from a customer on an internal mailing list, succinctly answered by RTF Template God, Hok-Min.

    Q: What's the upper limit for a sum calculation, in terms of the largest number BIP can handle?

    A: Internally, the XSL-T processor uses double precision. Therefore the upper limit and precision are the same as for a double (IEEE 754 double-precision binary floating-point format, binary64): approximately 16 significant decimal digits, with a maximum of 1.7976931348623157 x 10^308.

    So, now you know :)
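    Since BI Publisher runs on the JVM, Java's double is the same binary64, so both limits are easy to see for yourself; a quick sketch:

        // Sketch: the two limits Hok-Min quotes, straight from Java.
        public class DoubleLimits {
            public static void main(String[] args) {
                System.out.println(Double.MAX_VALUE); // 1.7976931348623157E308
                // Above 2^53 (about 9.0e15) a double can no longer represent
                // every integer -- the ~16 significant digit limit in action:
                System.out.println(Math.ulp(1e16));   // 2.0
            }
        }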

    Read the article

  • Linux partitioning problem

    - by Claudiu
    I am using cfdisk to repartition my HDD, as from the OS install I only got one big partition and a swap. I wanted to resize the big partition to a 1 GB /boot and use the rest of the space for an extended partition. After I run cfdisk, I recheck the partitions with fdisk -l and I get this:

        Disk /dev/sda: 320 GB, 320070320640 bytes
        255 heads, 63 sectors/track, 38913 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes

        Device Boot  Start    End     Blocks     Id  System
        /dev/sda3        1    38455   308881755  f   Extended LBA
        Warning: Partition 3 does not end on cylinder boundary.
        /dev/sda2    38455    38698   1951897    82  Linux swap
        /dev/sda1 *  38699    38913   311349654  83  Linux

    My problem is the warning message. I think I know the cause: the Blocks figure for sda1. How could that be so big if the Start-End interval is small?

    Read the article

  • My control key doesn't work, how do I fix it?

    - by Blaine LaFreniere
    My right control key doesn't work as it should. E.g. right Ctrl+T won't open new tabs in Firefox, right Ctrl+W won't switch windows in vim, etc. I know the key isn't physically broken, because xev shows that the right Ctrl key generates events; it just isn't responding as I expect in applications. Screenshot: http://i46.tinypic.com/33w1h76.png

    I tried Kim's answer but it still doesn't work:

        blaine@blaine-laptop ~ $ xmodmap -pke | grep 105
        keycode 105 = Control_R Control_R Control_R Control_R Control_R

    I tried mapping it as Control_L as well; that didn't work either. The computer is a laptop, so I am unable to plug the keyboard into another computer.
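    One hedged thing to try (a guess, not a confirmed fix): a keysym can be mapped to a keycode yet missing from the 'control' row of the modifier map, in which case applications never see it as a modifier.

        # Sketch: check whether Control_R appears in the 'control' row...
        xmodmap -pm
        # ...and if it is missing, re-add it:
        xmodmap -e "add control = Control_R"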

    Read the article

  • How to slave a 3.5" 500gb sata external hdd(Western Digital) to my dell Inspiron laptop

    - by AJ HDD
    i have a 3.5" western digital my book 500gb external hdd....i gave it to a friend of mine and he broke the usb port in it...i went to a nearby comp repair shop and had him solder the thing...it didnt detect when i plugged it in my dell inspiron laptop........i recently saw about the 3.5" sata to usb enclosure, so i went to check it....strangely when its placed in the enclosure, its not detecting in windows...also when it was put as secondary(im guessing slave) to the shop fellow's desktop it shows up in the bios and starting, but then again doesnt show in windows 7, and the guy told me i need to data recovery to get teh data back p.s.the wd hdd doesnt have os, just data, soo my question is: can i slave the drive to my dell laptop and try to recover the data....if so how? please i would really appreciate it...thanks in advance

    Read the article

  • IPSec 2 hosts (preshared key) - network shares very slow

    - by LxFlip
    I'm testing an IPSec config between two hosts, using IPSec auth with a preshared key, a very simple configuration. (I want to start with a simple IPSec preshared-key config, and then step up to certificates or Kerberos.) The problem is: the connection works, but accessing network file shares the first time is very slow. On the same host where I'm testing the shares, I have an IIS site running, and its performance seems normal and fast. Does anybody know why the SMB shares are so slow? Are there any IPSec policy options that should be tweaked? Thanks

    Read the article

  • Upgrade to 2008 R2

    - by DavidWimbush
    I don't like it, Carruthers. It's just too quiet.

    Well, I've done the pre-production server, the main live server and the Reporting/BI server with remarkably little trouble. Pre-production and live were rebuilds. I failed live over to our log shipping standby for the duration, which has a gotcha I blogged about before. When I failed back to the primary live server again, it was very quick to bring the databases online. I understand the databases don't actually get upgraded until you recover them, but there was no noticeable delay. It's gone from 2005 Workgroup - limited to 4GB of memory - to 2008 R2 Standard, so it can now use nearly all of the 30GB in the server. It's so much faster.

    The Reporting/BI server I upgraded in situ. This took a while but, again, went smoothly. Just watch out, because the master database was left at compatibility level 90 (a sketch of the fix is below). Also, the upgrade decided to use the reporting service's credentials for database access when running reports. It didn't preserve the existing credentials and I had to go into the Reporting Services Configuration Manager to put them back in. Make sure you know what credentials your server is using before you upgrade.

    All things considered, a fairly painless experience. Now I just have to upgrade and reset our log shipping standby server again!
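    For anyone hitting the same compatibility-level gotcha, a T-SQL sketch (the database name is a placeholder, not from the post; 100 is the 2008/2008 R2 level):

        -- Find databases left at an old compatibility level (90 = 2005)
        SELECT name, compatibility_level FROM sys.databases;

        -- Raise a straggler to the 2008/2008 R2 level
        ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 100;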

    Read the article

  • New Agile PLM Customer Testimonial Videos on YouTube

    - by Kerrie Foy
    Have you visited the Oracle Agile PLM channel on YouTube recently? There are many new video testimonials, and even an overview of how Oracle Agile PLM helps companies drive powerful corporate performance by maximizing product profitability. Here are a few highlights...

    Oracle Agile PLM: Proven Results
    Watch an overview of the transformative success our customers have realized using Oracle Agile PLM applications to take their company to the next level.

    Alcatel-Lucent Ups Competitive Edge with Oracle Agile PLM and Oracle EBS
    Brad Magnani of Alcatel-Lucent Enterprise describes how the Oracle Agile PLM and Oracle EBS solutions help speed time to market, eliminate wasted cash, secure data, and ensure product quality, enabling innovation and success.

    Herbalife: an Oracle Agile PLM Customer Video
    Filmed at OpenWorld 2010. Listen to Gary Swanson of Herbalife describe how his organization realizes powerful new insight into product information with Agile PLM Business Intelligence (BI).

    Tyson: an Oracle Agile PLM for Process Customer Video
    Filmed at OpenWorld 2010, featuring Kim Glenn.

    Tyson: an Oracle Agile PLM for Process Customer Video
    Filmed at OpenWorld 2010, featuring Amber Woods.

    We are so proud to have two testimonials from Tyson Foods! Tune in to each to see the unique perspectives on Agile PLM for Process at Tyson from different organizational views, demonstrating Oracle's ability to enable enterprise-wide PLM implementations delivering superior results.

    Take a moment to view these interesting customer testimonials to learn how Oracle Agile PLM applications are helping companies succeed. Subscribe to our YouTube channel today!

    Read the article

  • What You Said: How Do You Sync Your Files Between Your Devices?

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your tricks and techniques for keeping files synced between your different devices. Now we're back to highlight how you do it. Overwhelmingly, you do it with Dropbox.

    Despite the proliferation of different platforms, there have been few inroads made toward any sort of universal syncing. We heard from quite a few different readers, and by far the most popular option was to use Dropbox to ensure that you could get the music and documents you wanted whether you were on your desktop, laptop, netbook, iPhone, or Android device. In the same breath, however, nearly all of you added on an additional service. The real message, it would seem, is that there simply isn't a service good enough to meet all of the needs most users have, all of the time. The most common response to our Ask the Readers question was "Dropbox and..."; this pattern is illustrated nicely in the following quote. Kim writes:

        Dropbox for all kinds of things. (Would also use SugarSync, but it doesn't support Linux.) LastPass for passwords. Xmarks for bookmarks, although I'm going to try Firefox Sync soon. Evernote for things like shell commands I might want someday. Google Beta for music, once I get it uploaded. I have an Amazon account too, but Google gives you more space. Gmail.

    Michael finds himself in a similar situation.

    Read the article

  • 24 Hours of PASS scheduling

    - by Rob Farley
    I have a new appreciation for Tom LaRock (@sqlrockstar), who is doing a tremendous job leading the organising committee for the 24 Hours of PASS event (Twitter: #24hop). We've just been going through the list of speakers and their preferences for time slots, and hopefully we've kept everyone fairly happy.

    All the submitted sessions (59 of them) were put up for a vote, with over a thousand of you picking your favourites. The top 28 sessions as voted were all included (24 sessions plus 4 reserves), and duplicates (when a single presenter had two sessions in the top 28) were swapped out for others. For example, both sessions submitted by Cindy Gross were in the top 28. These swaps were chosen by the committee to get a good balance of topics. Amazingly, some big names missed out, and even the top ten included some surprises. T-SQL, indexes and reporting featured well in the top ten, and in the end, the mix between BI, Dev and DBA ended up quite nicely too.

    The ten most voted-for sessions were (in order):

    1. Jennifer McCown - T-SQL Code Sins: The Worst Things We Do to Code and Why
    2. Michelle Ufford - Index Internals for Mere Mortals
    3. Audrey Hammonds - T-SQL Awesomeness: 3 Ways to Write Cool SQL
    4. Cindy Gross - SQL Server Performance Tools
    5. Jes Borland - Reporting Services 201: the Next Level
    6. Isabel de la Barra - SQL Server Performance
    7. Karen Lopez - Five Physical Database Design Blunders and How to Avoid Them
    8. Julie Smith - Cool Tricks to Pull From Your SSIS Hat
    9. Kim Tessereau - Indexes and Execution Plans
    10. Jen Stirrup - Dashboards Design and Practice using SSRS

    I think you'll all agree this is shaping up to be an excellent event.

    Read the article

  • Open Source Project all dressed up but nowhere to go...

    - by Calanus
    Over the past two years, a colleague and I have built an online statistical analysis application using a mixture of Silverlight, WCF and R. I (a C# programmer) wrote all the Silverlight and WCF code, whilst my colleague (a statistician) came up with the stats algorithms and wrote the R code.

    Now, we think this app is fairly unique: a rich-GUI online statistics application that is much more intuitive than all the other online stat apps I've seen. But despite this we don't really know where to go with the project, mainly for the following reasons:

    1) It's fairly complicated stuff. Without the mix of programming and stats skills it would be difficult for anyone to "get into" the project and contribute.

    2) We are stalled by the lack of a proper place to host the site. Currently it sits on the family Windows 7 media centre; not exactly the best place to host it, as it could interfere with the missus trying to watch Corrie/Friends/Oprah etc.

    So, anyone got any ideas on how to move forward with this? I guess my strength is programming, not marketing, so despite working hard at this for the past couple of years I feel I've reached a dead end! Also, does anyone know of any free Windows hosting for open source projects? If I could find a proper place to put the app I might feel re-energised about the whole thing.

    The source code is on CodePlex at http://silverstats.codeplex.com, whilst the app is currently hosted at http://silverstats.co.uk

    Read the article

  • Ubuntu running like mud: system hangs and looks like it's running at 3 FPS

    - by user240803
    The system specs:

        AMD XP 3200+
        1 GB DDR 333 RAM
        160 GB IDE HD
        NVIDIA FX 5500 AGP video card
        Compaq Presario SR1230NX

    The system takes forever to boot, and when it does it runs like total mud; it reminds me of an overloaded system with too many windows open, even on a fresh install. I've tried so many things, like new memory (it had a 512 MB stick) and a new video card (the onboard 8 MB SiS sounded like the problem, but wasn't; it has gotten a little faster now, but not by much), and I tried to disable everything on the motherboard that could be disabled, with no help. This machine runs Windows XP, 7 and 8 just fine! I mean, for a single-core CPU, Windows 8 runs awesome! But I already have a gaming desktop with Windows 8 Pro; I want a Linux machine to get some time in and learn a few things. I want Ubuntu because of the Software Center, so I can install the things I want until I am familiar with the command line. I've worked on computers since I was 12 and remember some of the DOS commands, but I guess these are a little different. Anyway, any ideas? I've also tried both drivers for the NVIDIA card and that didn't help either. It's not the card, since it did this with both the NVIDIA card and the SiS onboard. It also does this in live mode from the USB, so I don't think it's the hard drive. I'm running out of hardware options to try. I know this version of Linux works, because I've booted it on other machines and it ran great. What is it with this Compaq? Here is a video of exactly what it's doing; let me know if you need anything else, I am right by the computer tonight, so ask anything: http://youtu.be/-P-XNo81098

    Read the article

  • The Freemium-Premium Puzzle

    The more time I spend thinking about the value of information, the more I find that digitalizing information has significantly changed 'information markets', potentially in an irreversible manner. The graph at the bottom outlines my current view.

    The existing business models tend to be the same in the digital and analogue information worlds, i.e. revenue is derived from a combination of consumers' payments and advertisement. Even monetizing 'meta-information', as search engines do, isn't new; just think of the once-popular 'Who's Who'. What really changed is the price-value ratio. The curve is pushed down, closer to the axis: you pay less for the same, or often even get more for less. If you recall the capabilities I described in 'relevance of information', you will see that there are many additional features available for digital content compared to analogue content. I think this is a good 'blue ocean strategy', combining existing capabilities in a new way (Kim, W.C. & Mauborgne, R. (2005) Blue Ocean Strategy. Boston: Harvard Business School Publishing). In addition, the different channels of digital information distribution significantly change the value of information; I will touch on this in one of my next blogs.

    Right now, many information providers have started to offer 'freemium' content through digital channels, hoping to get a premium for the 'full' content. Offering no freemium seems to take them out of business, because they are apparently no longer visible in today's most relevant channels of information consumption. But the more freemium is provided, the lower the premium gets; a truly puzzling situation. To make it worse, channel providers increasingly regard information as a value-adding and differentiating activity. Maybe new types of exclusive, strategic alliances will solve the puzzle, introducing new types of 'gatekeepers', which, to me, somehow does not match the spirit of the WWW and Generation Y's perception of information consumption and exchange.

    Read the article

  • Coldfusion on VPS, how much JVM heap memory?

    - by Steven Filipowicz
    Recently I got a VPS server and I'm running ColdFusion. The website was running fine until it got more and more traffic and I started to encounter 'OutOfMemory' exceptions. My first thought was simply to raise the memory of the VPS server, but this didn't help. After doing some Google searches I found a setting in the CF Admin to set the JVM heap memory. It was at the default: max heap size 512 MB, min heap size empty. After playing around a bit I have now set it to min 50 MB and max 200 MB; the good thing is that I'm not getting the 'OutOfMemory' exceptions anymore. So far so good! But with about 50 active visitors on the website, the website starts to get slow. The CPU usage is only about 8% (Windows Task Manager), and the Task Manager also shows only about 30% of the 3 GB RAM in use. So I'm thinking that my values could be tweaked to use more of the RAM. Honestly, I don't understand these JVM heap settings, so I have no clue what a good setting would be for me. I found a CF script that displays the memory usage; the details are:

        Heap Memory Usage - Committed              194 MB
        Heap Memory Usage - Initial                50.0 MB
        Heap Memory Usage - Max                    194 MB
        Heap Memory Usage - Used                   163 MB
        JVM - Free Memory                          31.2 MB
        JVM - Max Memory                           194 MB
        JVM - Total Memory                         194 MB
        JVM - Used Memory                          163 MB
        Memory Pool - Code Cache - Used            13.0 MB
        Memory Pool - PS Eden Space - Used         6.75 MB
        Memory Pool - PS Old Gen - Used            155 MB
        Memory Pool - PS Perm Gen - Used           64.2 MB
        Memory Pool - PS Survivor Space - Used     1.07 MB
        Non-Heap Memory Usage - Committed          77.4 MB
        Non-Heap Memory Usage - Initial            18.3 MB
        Non-Heap Memory Usage - Max                240 MB
        Non-Heap Memory Usage - Used               77.2 MB

        Free Allocated Memory: 30mb
        Total Memory Allocated: 194mb
        Max Memory Available to JVM: 194mb
        % of Free Allocated Memory: 16%
        % of Available Memory Allocated: 100%

    My JVM arguments are:

        -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m
        -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/../
        -Dcoldfusion.libPath={application.home}/../lib

    Can I give the JVM more memory? If so, what settings should I use? Thanks very much!
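    A hedged sketch of one answer, not a definitive tuning guide: the PS Old Gen pool sits at 155 MB of a 194 MB heap and 100% of the allowance is already allocated, so the 200 MB cap itself looks like the bottleneck. On a 3 GB box you could raise both bounds and pin min = max to avoid heap-resize pauses; in the CF Admin's JVM arguments that would look something like this (the 1024 MB figure is an assumption, not a measured requirement):

        -server -Xms1024m -Xmx1024m -XX:MaxPermSize=192m -XX:+UseParallelGC
        -Dsun.io.useCanonCaches=false
        -Dcoldfusion.rootDir={application.home}/../
        -Dcoldfusion.libPath={application.home}/../lib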

    Read the article

  • jboss 5.1 mysql connection pooling

    - by boyd4715
    I am using JBoss 5.1.0.GA, MySQL 5.5 and Hibernate 3.3.1 GA (included with JBoss) + Spring. My question is: do I need to add c3p0 as a data source in my Spring/Hibernate configuration for connection pooling, or are the settings in the JBoss mysql-ds.xml enough? My mysql-ds.xml is the following:

        <datasources>
          <local-tx-datasource>
            <jndi-name>MySqlDS</jndi-name>
            <connection-url>jdbc:mysql://localhost:3306/ecotrak</connection-url>
            <driver-class>com.mysql.jdbc.Driver</driver-class>
            <user-name>ecotrak</user-name>
            <password>ecotrak</password>
            <min-pool-size>5</min-pool-size>
            <max-pool-size>20</max-pool-size>
            <idle-timeout-minutes>5</idle-timeout-minutes>
            <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLExceptionSorter</exception-sorter-class-name>
            <!-- should only be used on drivers after 3.22.1 with "ping" support -->
            <valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLValidConnectionChecker</valid-connection-checker-class-name>
            <!-- sql to call when connection is created
                 <new-connection-sql>some arbitrary sql</new-connection-sql> -->
            <!-- sql to call on an existing pooled connection when it is obtained from pool;
                 MySQLValidConnectionChecker is preferred for newer drivers
                 <check-valid-connection-sql>some arbitrary sql</check-valid-connection-sql> -->
            <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml (optional) -->
            <metadata>
              <type-mapping>mySQL</type-mapping>
            </metadata>
          </local-tx-datasource>
        </datasources>
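    If the app can use the container's pool, a minimal Spring sketch of the no-c3p0 route (the bean id "dataSource" is an assumption about the app's wiring; JBoss binds a local-tx-datasource under java:/MySqlDS):

        <!-- Sketch: reuse the JBoss-managed pool instead of adding c3p0 -->
        <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
            <property name="jndiName" value="java:/MySqlDS"/>
        </bean>

    With this wiring, the min/max pool sizes in mysql-ds.xml are the ones that matter, and a second c3p0 pool would just duplicate them.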

    Read the article

  • Difference in performance: local machine VS amazon medium instance

    - by user644745
    I see a drastic difference in the performance matrix when I run apache benchmark (ab) on my local machine vs production hosted on an Amazon medium instance. The same concurrency (5) and the same total number of requests (111) were run against both. Amazon has better memory than my local machine, but there are 2 CPUs in my local machine vs 1 CPU in m1.medium. My internet speed is very low at the moment; I am getting a transfer rate of 25.29 KB/s. How can I improve the performance? I also do not know how to interpret Connect, Processing, Waiting and Total in the ab output. Here is localhost:

        Server Hostname:        localhost
        Server Port:            9999
        Document Path:          /
        Document Length:        7631 bytes
        Concurrency Level:      5
        Time taken for tests:   1.424 seconds
        Complete requests:      111
        Failed requests:        102
           (Connect: 0, Receive: 0, Length: 102, Exceptions: 0)
        Write errors:           0
        Total transferred:      860808 bytes
        HTML transferred:       847155 bytes
        Requests per second:    77.95 [#/sec] (mean)
        Time per request:       64.148 [ms] (mean)
        Time per request:       12.830 [ms] (mean, across all concurrent requests)
        Transfer rate:          590.30 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.5      0       1
        Processing:    14   63  99.9     43     562
        Waiting:       14   60  96.7     39     560
        Total:         14   63  99.9     43     563

    And this is production:

        Document Path:          /
        Document Length:        7783 bytes
        Concurrency Level:      5
        Time taken for tests:   33.883 seconds
        Complete requests:      111
        Failed requests:        0
        Write errors:           0
        Total transferred:      877566 bytes
        HTML transferred:       863913 bytes
        Requests per second:    3.28 [#/sec] (mean)
        Time per request:       1526.258 [ms] (mean)
        Time per request:       305.252 [ms] (mean, across all concurrent requests)
        Transfer rate:          25.29 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      290  297  14.0    293     413
        Processing:   897 1178  63.4   1176    1391
        Waiting:      296  606 135.6    588    1171
        Total:       1191 1475  66.0   1471    1684

    Read the article

  • Nginx slower than Apache?

    - by ichilton
    Hi, I've just set up two identical Rackspace Cloud instances and am doing some comparisons and benchmarks between Apache and Nginx. I'm testing with a 3.4k PNG file; I started with 512 MB server instances but have now moved to 1024 MB instances. I'm very surprised to see that, whatever I try, Apache seems to consistently outperform Nginx. What am I doing wrong?

    Nginx:

        Server Software:        nginx/0.8.54
        Server Port:            80
        Document Length:        3400 bytes
        Concurrency Level:      100
        Time taken for tests:   2.320 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      3612000 bytes
        HTML transferred:       3400000 bytes
        Requests per second:    431.01 [#/sec] (mean)
        Time per request:       232.014 [ms] (mean)
        Time per request:       2.320 [ms] (mean, across all concurrent requests)
        Transfer rate:          1520.31 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0   11  15.7      3     120
        Processing:     1   35  76.9     20    1674
        Waiting:        1   31  73.0     19    1674
        Total:          1   46  79.1     21    1693

        Percentage of the requests served within a certain time (ms)
          50%     21
          66%     39
          75%     40
          80%     40
          90%     98
          95%    136
          98%    269
          99%    334
         100%   1693 (longest request)

    And Apache:

        Server Software:        Apache/2.2.16
        Server Port:            80
        Document Length:        3400 bytes
        Concurrency Level:      100
        Time taken for tests:   1.346 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      3647000 bytes
        HTML transferred:       3400000 bytes
        Requests per second:    742.90 [#/sec] (mean)
        Time per request:       134.608 [ms] (mean)
        Time per request:       1.346 [ms] (mean, across all concurrent requests)
        Transfer rate:          2645.85 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    1   3.7      0      27
        Processing:     0    3   6.2      1      29
        Waiting:        0    2   5.0      1      29
        Total:          1    4   7.0      1      29

        Percentage of the requests served within a certain time (ms)
          50%      1
          66%      1
          75%      1
          80%      1
          90%     17
          95%     19
          98%     26
          99%     27
         100%     29 (longest request)

    I'm currently using worker_processes 4; and worker_connections 1024; but I've tried and benchmarked different values and see the same behaviour with all of them. I just can't get it to perform as well as Apache, and from what I've read previously I'm shocked by this! Can anyone give any advice? Thanks, Ian

    Read the article

  • Copy from CDROM is very slow in Ubuntu

    - by ???
    I'm using this command to copy a CD-ROM image:

        # dd if=/dev/sr0 of=./maverick.iso

    But it's very slow, at about 350 KB/s. I've searched Google and tried the command:

        # hdparm -vi /dev/sr0

        /dev/sr0:
         HDIO_DRIVE_CMD(identify) failed: Bad address
         IO_support   =  1 (32-bit)
         readonly     =  0 (off)
         readahead    = 256 (on)
         HDIO_GETGEO failed: Inappropriate ioctl for device

         Model=DVD-ROM UJDA775, FwRev=DA03, SerialNo=
         Config={ Fixed Removeable DTR<=5Mbs DTR>10Mbs nonMagnetic }
         RawCHS=0/0/0, TrkSize=0, SectSize=0, ECCbytes=0
         BuffType=unknown, BuffSize=unknown, MaxMultSect=0
         (maybe): CurCHS=0/0/0, CurSects=0, LBA=yes, LBAsects=0
         IORDY=yes, tPIO={min:180,w/IORDY:120}, tDMA={min:120,rec:120}
         PIO modes:  pio0 pio1 pio2 pio3 pio4
         DMA modes:  sdma0 sdma1 sdma2 mdma0 mdma1 mdma2
         UDMA modes: udma0 udma1 *udma2
         AdvancedPM=no
         Drive conforms to: ATA/ATAPI-5 T13 1321D revision 3: ATA/ATAPI-1,2,3,4,5

         * signifies the current active mode

    It seems like DMA is already on. And a device test gives:

        # hdparm -t /dev/sr0

        /dev/sr0:
         Timing buffered disk reads:  2 MB in 6.58 seconds = 311.10 kB/sec

        # sudo hdparm -tT /dev/sr0

        /dev/sr0:
         Timing cached reads:         2 MB in 2.69 seconds = 760.96 kB/sec
         Timing buffered disk reads:  4 MB in 5.19 seconds = 789.09 kB/sec

    The CD-ROM device and disc should be okay, because I can copy it very fast in Windows using the UltraISO utility. So I guess something is not configured right in Ubuntu. What is it?
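    A hedged experiment rather than a confirmed fix: dd reads in 512-byte blocks by default, which can throttle optical drives, and the drive's read speed setting is also worth checking.

        # Sketch: read in 2 MB chunks instead of the default 512-byte blocks
        dd if=/dev/sr0 of=./maverick.iso bs=2048k

        # Optionally ask the drive for a higher read speed (-E sets CD/DVD speed)
        hdparm -E 48 /dev/sr0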

    Read the article

  • How flexible is the 'indirect' function?

    - by Chuck
    My curiosity pushes me to ask this question. Suppose I have a series of functions that each reference a different column in a worksheet, but all end on the same row of data. Is there a way to point the 'row' part of each cell reference at a single blank cell and use it as a variable, so that the results of all the functions up to a desired row update simultaneously? Example:

        =AVERAGE('worksheet 1'.$A$1:'worksheet 1'.$A100)
        =MAX('worksheet 1'.$B$1:'worksheet 1'.$B100)
        =MIN('worksheet 1'.$C$1:'worksheet 1'.$C100)
        =SUM('worksheet 1'.$D$1:'worksheet 1'.$D100)

    Pseudo formulas, where <row> stands for the number held in 'worksheet 2'.$A$1:

        =AVERAGE('worksheet 1'.$A$1:'worksheet 1'.$A<row>)
        =MAX('worksheet 1'.$B$1:'worksheet 1'.$B<row>)
        =MIN('worksheet 1'.$C$1:'worksheet 1'.$C<row>)
        =SUM('worksheet 1'.$D$1:'worksheet 1'.$D<row>)

    Here 'worksheet 2'.$A$1 would only contain a number corresponding to a row in 'worksheet 1'. After stumbling upon and playing with the INDIRECT() function, I have only been able to replace an entire cell reference (column and row) with any success. The formula so far:

        =SUM('worksheet 1'.C3:INDIRECT(A1))

    where A1 is on 'worksheet 2' and contains a full cell reference pointing to 'worksheet 1'. Any pointers?
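    A hedged pointer, assuming the Calc-style syntax used above: INDIRECT accepts any string, so you can build just the row-varying end of the range by concatenation and keep the column fixed, e.g.:

        =SUM('worksheet 1'.$D$1:INDIRECT("'worksheet 1'.D" & 'worksheet 2'.$A$1))

    With 100 in 'worksheet 2'.$A$1 this sums 'worksheet 1'.$D$1:D100, and changing that one cell moves the end row of every formula written this way.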

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. The working-set jobs start out at perhaps 40 MB/min and, over the course of the backup job, slowly drop so low that the BE job rate display in "current jobs" goes blank. Since we usually only do changed files for one day, the job is usually small and finishes overnight, so we don't worry about the slowness; but we had some issues with the backup server and missed about six days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode and sent them that log plus a VXgather from the BE host, and they had no fix or workaround. To give an idea, the working-set job in question has been running for the last 3.5 hours and has backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

    Read the article

  • Is it possible to trace the delegation path for a DNS lookup?

    - by Josh Glover
    I'm trying to determine why a Nagios host check is failing (hostnames and IPs have been changed to protect the guilty):

        : jmglov@laurana; host www.foo.com
        ;; connection timed out; no servers could be reached

        : jmglov@laurana; for ns in `grep -o '\([0-9]\+[.]\)\{3\}[0-9]\+$' /etc/resolv.conf`; do ping -qc 1 $ns; done
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.

        --- 192.168.1.1 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 10.911/10.911/10.911/0.000 ms

        PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.

        --- 192.168.1.2 ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 0ms
        rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms

    So I know that my nameservers are reachable, meaning that some nameserver along the delegation path to the authoritative nameserver for my host is not responding. Is there an easy way to determine which nameserver this is (basically a traceroute for DNS)?
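    One approach that may fit (assuming dig is installed): dig +trace performs the delegation walk itself, starting at the root servers and printing which nameserver it queries at each step, so the hop that stops answering stands out.

        # Walk the delegation chain from the roots, showing each referral
        dig +trace www.foo.com

        # If a particular parent zone looks suspect, ask its servers directly
        dig @a.gtld-servers.net foo.com NS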

    Read the article

  • Terrible noises from subwoofer of ACER Aspire 6930 with Realtek sound chip

    - by OneWorld
    After approximately 5-15 min of listening to music, my subwoofer begins to make terrible noises; it's just "coughing". This began six months after I got this computer. I have now found out that I can temporarily fix the problem by "restarting" the audio stream of the application that plays music, for example by reloading the last.fm page (which reloads the Flash file). Another way to reset the audio playback is switching the speaker configuration shown below in the screenshot. According to many posts on the internet, like http://www.tomshardware.co.uk/forum/52918-20-acer-aspire-6935g-speaker-problem, ACER support isn't any help, exchanging hardware doesn't fix the problem, and even the later models have this problem. Turning off the volume of the subwoofer is not an option for me. I still have warranty (I bought a one-year extension). I have already tried about 15 versions of the Realtek driver, with no success. I am not sure, but MAYBE the problem did not occur on the original Windows Vista that shipped with this computer; however, I removed the original Windows for good reasons (english). What do you suggest? Did anyone fix this problem, maybe by writing a script which resets the audio streams every 5 minutes? Shall I take the effort of dealing with ACER support until they give me another model? (I would then be without a computer for a longer time, and would spend money on telephone hotlines (1.30 EUR/min)...) Here is additional info, if it is any help: Windows 7 64-bit (the original was Windows Vista Home Premium 32-bit). All specs. Audio driver version:

    Read the article

  • Supervisord doesn't stop nginx process

    - by Lennart Regebro
    I'm using Supervisor a lot, and in this project I have an nginx process managed by supervisord. The relevant parts of the configuration are:

        [supervisord]
        logfile=/home/projects/eceee-web/prod/var/log/supervisord.log
        logfile_maxbytes=5MB
        logfile_backups=10
        loglevel=info
        pidfile=/home/projects/eceee-web/prod/var/supervisord.pid
        ; childlogdir=/home/projects/eceee-web/prod/var/log
        nodaemon=false   ; (start in foreground if true; default false)
        minfds=1024      ; (min. avail startup file descriptors; default 1024)
        minprocs=200     ; (min. avail process descriptors; default 200)
        directory=/home/projects/eceee-web/prod

        [program:nginx]
        command = /home/projects/eceee-web/prod/bin/nginx
        redirect_stderr = true
        autostart = true
        autorestart = true
        directory = /home/projects/eceee-web/prod
        stdout_logfile = /home/projects/eceee-web/prod/var/log/nginx-stdout.log
        stderr_logfile = /home/projects/eceee-web/prod/var/log/nginx-stderr.log

    The /home/projects/eceee-web/prod/bin/nginx command starts nginx in the foreground; it does not daemonize itself. Still, stopping it fails:

        supervisorctl stop nginx

    gives no answer, and the process continues. Any idea why? This is on OS X (Darwin), with Supervisor 3.0a9 and nginx 0.7.65.
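    A hedged thing to verify, since supervisord can only signal processes it directly parents: if bin/nginx is a wrapper script, the nginx master it launches may not be the PID supervisord sends the stop signal to. Running the binary with daemon off forced keeps the master attached (a sketch; whether bin/nginx passes flags through is an assumption):

        [program:nginx]
        command = /home/projects/eceee-web/prod/bin/nginx -g "daemon off;"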

    Read the article
