Search Results

Search found 4846 results on 194 pages for 'vertical sync'.

Page 38 of 194

  • Can I speed up File Sync for Ubuntu One?

    - by tapan
    I am using Ubuntu 11.04 on my home laptop and the same on my work laptop. I just want to sync the work folders on my home laptop to my office laptop, which is ~300 MB of data. This would normally be a short download, but Ubuntu One is taking forever to sync it. Any ideas what could cause this? I am not behind any firewall or anything of that sort, and I have not checked the limit-bandwidth box in the preferences.

    Read the article

  • Android Shopping App And SQLite

    - by Melih Mucuk
    I want to develop a shopping app for Android that works like tinypay.me: users can sign up and immediately sell or buy items. I need a database. I want to use SQLite, but I can't find answers to these questions: SQLite database files are stored on the user's phone, aren't they? If so, here is the scenario: a user logs in and wants to sell something in the app. He or she creates an ad and publishes it in the app, and that data is written to his or her local database. When other users log in, will they see this ad? If that's possible, how can the database be synced between users? Thanks for your attention.

    Read the article

  • Cloud sync between iPad/iPhone app

    - by Macatomy
    I have a Core Data app that will end up being an iPhone/iPad universal application. I would like to implement cloud syncing so that an iPhone and an iPad both running the app can share data. I'm planning to use the recently released Dropbox API, which allows apps to store files in the cloud. Does anyone have any thoughts on the best way to go about this? What I was originally thinking was to store the app's database (SQLite) in the cloud and then download that database, but I then realized that this method would make it painfully difficult to merge changes (rather than replacing the whole database). Any thoughts are appreciated. Thanks.

    Read the article

  • Git doesn't sync files until committed, even if checked out in a different branch

    - by DertWaiter
    Okay, I have Git 1.7.11.1 on Windows and a local test repository with two branches. One is master, with index.php and help.php. I then create another branch called slave :) From Git Bash I run rm help.php and it disappears from the folder, but I don't stage anything. I then check out the master branch, which is supposed to restore help.php because it is not modified in the master branch, isn't it? But it does not. When I go back to the slave branch, commit, and then check out master, help.php appears. Is that the way it is supposed to work? Why?

    Read the article

  • Sync offline JQuery Mobile form

    - by Dimas
    My name is Dimas and I work as a developer in a hospital. I've developed a web form with jQuery Mobile which is accessed from an Android tablet. It sends data (text and FILES) to the server (PHP), where it is saved in a MySQL database. These tablets are used by doctors who visit patients in their homes. Some of these patients live in remote areas with very bad Internet connectivity, so I'm thinking of using the HTML5 offline features to handle these situations. I've started reading about it, but I'm a newbie with this technology and have some questions: Do you think this is the best/easiest solution? If I could save the form data in local SQLite storage when the tablet is offline, how could I synchronize it with the data in the MySQL server when the tablet comes back online? Thanks
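
    A minimal sketch of one common approach, offered only as an illustration: queue submissions in localStorage while the tablet is offline and replay them when connectivity returns. The endpoint name save.php and the storage key pendingForms are made-up placeholders, and file uploads would need separate handling since localStorage only holds strings.

        // Sketch: queue form submissions while offline, replay them when back online.
        function submitForm(data) {
          if (navigator.onLine) {
            return $.post('save.php', data);                    // normal online submit
          }
          var queue = JSON.parse(localStorage.getItem('pendingForms') || '[]');
          queue.push(data);                                     // keep it for later
          localStorage.setItem('pendingForms', JSON.stringify(queue));
        }

        window.addEventListener('online', function () {
          var queue = JSON.parse(localStorage.getItem('pendingForms') || '[]');
          queue.forEach(function (data) { $.post('save.php', data); });
          localStorage.removeItem('pendingForms');
        });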

    Read the article

  • Sencha 2 : Sync models with hasMany associations in LocalStorage

    - by Alytrem
    After hours and hours trying to do this, I need your help. I have two models: Project and Task. A project hasMany tasks and a task belongsTo a project. Everything works well if you don't use a store to save these models. I want to save both tasks and projects in two stores (TaskStore and ProjectStore); these stores use a LocalStorage proxy. I tried many things, and the most logical is: Ext.define('MyApp.model.Task', { extend: 'Ext.data.Model', config: { fields: [ { name: 'name', type: 'string' }, { dateFormat: 'd/m/Y g:i', name: 'start', type: 'date' }, { dateFormat: 'd/m/Y g:i', name: 'end', type: 'date' }, { name: 'status', type: 'string' } ], belongsTo: { model: 'MyApp.model.Project' } } }); Ext.define('MyApp.model.Project', { extend: 'Ext.data.Model', alias: 'model.Project', config: { hasMany: { associationKey: 'tasks', model: 'MyApp.model.Task', autoLoad: true, foreignKey: 'project_id', name: 'tasks', store: {storeId: "TaskStore"} }, fields: [ { name: 'name', type: 'string' }, { dateFormat: 'd/m/Y', name: 'start', type: 'date' }, { dateFormat: 'd/m/Y', name: 'end', type: 'date' } ] } }); This is my "main": var project = Ext.create("MyApp.model.Project", {name: "mojo", start: "17/03/2011", end: "17/03/2012", status: "termine"}); var task = Ext.create("MyApp.model.Task", {name: "todo", start: "17/03/2011 10:00", end: "17/03/2012 19:00", status: "termine"}); project.tasks().add(task); Ext.getStore("ProjectStore").add(project); The project is added to the store, but the task is not. Why?!
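
    Purely as a hedged sketch and not a confirmed fix: since the associated TaskStore is configured separately from the ProjectStore, one thing worth checking is whether the task ever reaches its own store. Adding it there explicitly and syncing both stores makes the persistence step visible (Ext.getStore, add and sync are standard Sencha Touch 2 store methods).

        // Sketch: persist the task through its own store as well, then sync both stores.
        Ext.getStore("TaskStore").add(task);
        Ext.getStore("TaskStore").sync();
        Ext.getStore("ProjectStore").sync();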

    Read the article

  • index seems to be out of sync for jquery .keydown and .click methods

    - by Growler
    I'd like to navigate images using both the keyboard and the mouse (clicking left and right arrow images). I'm using jQuery to do this, but the shared imgIndex seems to be out of sync between the .keydown function and the .click function... Whenever the .keydown function decrements (--) or increments (++) imgIndex, isn't that changed index also used in the click function? So shouldn't they always be on the same index? keydown function: <script type="text/javascript"> var imgArray = [<?php echo implode(',', getImages($site)) ?>]; $(document).ready(function() { var img = document.getElementById("showimg"); img.src = imgArray[<?php echo $imgid ?>]; var imgIndex = <?php echo $imgid ?>; alert(imgIndex); $(document).keydown(function (e) { var key = e.which; var rightarrow = 39; var leftarrow = 37; var random = 82; if (key == rightarrow) { imgIndex++; if (imgIndex > imgArray.length-1) { imgIndex = 0; } img.src = imgArray[imgIndex]; } if (key == leftarrow) { if (imgIndex == 0) { imgIndex = imgArray.length; } img.src = imgArray[--imgIndex]; } }); click function: Connected to the left and right clickable images $("#next").click(function() { imgIndex++; if (imgIndex > imgArray.length-1) { imgIndex = 0; } img.src = imgArray[imgIndex]; }); $("#prev").click(function() { if (imgIndex == 0) { imgIndex = imgArray.length; } img.src = imgArray[--imgIndex]; }); }); Just so you have some visibility into the getImages PHP function: <?php function getImages($siteParam) { include 'dbconnect.php'; if ($siteParam == 'artwork') { $table = "artwork"; } else { $table = "comics"; } $catResult = $mysqli->query("SELECT id, title, path, thumb, views, catidFK FROM $table"); $img = array(); while($row = $catResult->fetch_assoc()) { $img[] = "'" . $row['path'] . "'"; } return $img; } ?> Much appreciated! Snapshot of where the script is on "view image.php"
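
    One way to keep the two entry points from drifting, sketched on the assumption that img, imgArray and imgIndex are the variables shown above: put the wrap-around logic in a single helper and call it from both the key handler and the click handlers, so only one piece of code ever moves the index.

        // Sketch: one function owns the shared index and the image swap.
        function showImage(step) {
          imgIndex = (imgIndex + step + imgArray.length) % imgArray.length;
          img.src = imgArray[imgIndex];
        }

        $("#next").click(function () { showImage(1); });
        $("#prev").click(function () { showImage(-1); });
        // ...and inside the keydown handler:
        // if (key == rightarrow) { showImage(1); }
        // if (key == leftarrow)  { showImage(-1); }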

    Read the article

  • Keeping track of dirty blocks on a block device

    - by mikeY
    I'm looking for a way to keep track of which blocks on a block device are modified after a point in time. What I eventually want to use this for is keeping two 2 TB disks in sync, one of which only comes online (connected through USB) once a month. Without knowing which blocks have been modified, I have to go through the whole 2 TB every time. I'm using a recent GNU/Linux OS and have C and Python experience. I'm hoping to avoid writing kernel-level code, as I don't have any experience in that area whatsoever. My current theory is that there should be hooks somewhere through which my code can get called when a disk flush is performed. Any ideas?

    Read the article

  • Best way to fix an out-of-sync TFS workspace after a back-up/restore

    - by DanO
    On Wednesday I had to restore from a backup image I made on Monday. At the time of the snapshot I had about 20 files modified, which I later checked in, along with more changes, on Tuesday. Now that I am back to the snapshot from Monday morning, my workspace shows all of these files as checked out or added, etc., even keeping my check-in comments and work-item associations. But I already did that check-in on Tuesday. I'm thinking I will shelve all the pending changes (just in case), then undo all changes and get latest (specific version), and I should be back in good shape. Any cautions or suggestions? (TFS 2008, VS2010)

    Read the article

  • rsync verify a file already exists in dest folder so it will skip the copy on the 1st sync

    - by joel_gil
    I have been looking at different rsync tutorials for a specific situation I have. I have a home server with all my pics; this server is my backup. My PC is the one that receives the new pics, and until now I had been manually copying new photos from the PC to the server. I was trying to set up rsync to do this automatically, and in principle it works without problems. Now the issue: when I fire up rsync it starts copying all the files, even the ones that are already in the destination (because it is the first sync). So my question is: is it possible for rsync to verify that a file is the same (name/size/binary content) so it will skip the copy on the first sync?
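
    Not an authoritative answer, just a sketch of the relevant switches (paths are placeholders): rsync's default quick check already skips files whose size and modification time match on both sides, --checksum forces a content comparison instead, and --ignore-existing skips anything already present on the receiver regardless of content.

        # Dry run first (-n) to see what would actually be copied; paths are placeholders.
        rsync -avn /pc/photos/ user@server:/backup/photos/
        # Compare file contents by checksum instead of size+mtime (slower, reads everything):
        rsync -av --checksum /pc/photos/ user@server:/backup/photos/
        # Or never re-copy files that already exist on the server at all:
        rsync -av --ignore-existing /pc/photos/ user@server:/backup/photos/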

    Read the article

  • Not seeing Sync Block in Object Layout

    - by bob-bedell
    It's my understanding the all .NET object instances begin with an 8 byte 'object header': a synch block (4 byte pointer into a SynchTableEntry table), and a type handle (4 byte pointer into the types method table). I'm not seeing this in VS 2010 RC's (CLR 4.0) debugger memory windows. Here's a simple class that will generate a 16 byte instance, less the object header. class Program { short myInt = 2; // 4 bytes long myLong = 3; // 8 bytes string myString = "aString"; // 4 byte object reference // 16 byte instance static void Main(string[] args) { new Program(); return; } } An SOS object dump tells me that the total object size is 24 bytes. That makes sense. My 16 byte instance plus an 8 byte object header. !DumpObj 0205b660 Name: Offset_Test.Program MethodTable: 000d383c EEClass: 000d13f8 Size: 24(0x18) bytes File: C:\Users\Bob\Desktop\Offset_Test\Offset_Test\bin\Debug\Offset_Test.exe Fields: MT Field Offset Type VT Attr Value Name 632020fc 4000001 10 System.Int16 1 instance 2 myInt 632050d8 4000002 4 System.Int64 1 instance 3 myLong 631fd2b8 4000003 c System.String 0 instance 0205b678 myString Here's the raw memory: 0x0205B660 000d383c 00000003 00000000 0205b678 00000002 ... And here are some annotations: offset 0 000d383c ;TypeHandle (pointer to MethodTable), 4 bytes offset 4 00000003 00000000 ;myLong, 8 bytes offset 12 0205b678 ;myString, 4 byte reference to address of "myString" on GC Heap offset 16 00000002 ;myInt, 4 bytes My object begins a address 0x0205B660. But I can only account for 20 bytes of it, the type handle and the instance fields. There is no sign of a synch block pointer. The object size is reported as 24 bytes, but the debugger is showing that it only occupies 20 bytes of memory. I'm reading Drill Into .NET Framework Internals to See How the CLR Creates Runtime Objects, and expected the first 4 bytes of my object to be a zeroed synch block pointer, as shown in Figure 8 of that article. Granted, this is an article about CLR 1.1. I'm just wondering if the difference between what I'm seeing and what this early article reports is a change in either the debugger's display of object layout, or in the way the CLR lays out objects in versions later than 1.1. Anyway, can anyone account for my 4 missing bytes?

    Read the article

  • outlook calendar connectivity with java web service

    - by Isisagate
    We currently have a Java/JSP online web service that includes its own custom calendar. I am trying to research the possibility of connecting it to a user's Outlook. Our most basic need is some way to send the person a meeting request that can be added to their Outlook from our service. I know the ideal solution is to sync back and forth, but simply being able to import the data from our calendar into someone's Outlook would be sufficient. Does anyone have any resources they can point me to that might help with information gathering, or any examples/comments?

    Read the article

  • sync android application with website?

    - by Pranav
    I launched the first version of my flash card application here: https://play.google.com/store/apps/details?id=in.co.discoverit.my_FlashCards. I am working on a flash card application and use a SQLite DB to store my cards and data. Now I want to synchronize my database with the website's database. How would I do this for my application? Please can anyone tell me how I should start doing this, and also tell me the possible ways to do it on both the device side and the website side? It's urgent for my application. Can anyone help me out? Regards, Pranav

    Read the article

  • How do I sync two local file structures.

    - by James
    Hi, I have two large source trees. One of them has some out-of-date image files. I would like to automatically update all the old image files (png, jpg, gif) in one source tree with the up-to-date image files from the other source tree. I am using Windows 7, but I have Cygwin installed. I have tried using rsync so far, but with no success. I was hoping I could do something like: rsync -r *.png newSourceTree oldSourceTree Any help would be much appreciated.
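
    Offered only as an illustration of rsync's filter rules (the directory names come from the question; the layout they imply is an assumption): the *.png glob in the attempted command is expanded by the shell against the current directory rather than applied per subdirectory, whereas include/exclude filters let rsync itself pick out the image files while recursing.

        # Recurse, but only transfer the image types; '*/' keeps directories traversable.
        rsync -rv --include='*/' --include='*.png' --include='*.jpg' --include='*.gif' \
              --exclude='*' newSourceTree/ oldSourceTree/
        # Add --existing to update only images already present in oldSourceTree.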

    Read the article

  • Sync issue in collection fetch backbone

    - by Stefano Maglione
    I'm fetching a collection, but I have a problem: the collection's parse function uses another AJAX call, so if I console.log the result of the fetch on the line after the fetch call, it is always undefined. The function that calls fetch: friends: function(){ var amici=new Amicizie(); var amicilist=amici.fetch(); console.log(amicilist); <--- undefined, because it runs before the fetch completes. The collection being called: var obj={}; var Amicizie = Backbone.Collection.extend({ url:'https://api.parse.com/1/classes/User/', parse: function(data) { var cur_user=Parse.User.current().id; $.ajax({ type: 'GET', headers: {'X-Parse-Application-Id':'qS0KL***EM1tyhM9EEPiTS3VMk','X-Parse-REST-API-Key':'nh3eoUo9G***JIfIt1Gm'}, url: "https://api.parse.com/1/classes/_User/?where=....", success: function(object) { console.log(object ); obj=object; console.log(obj ); }, error: function(data) { console.log("ko" ); } }); return obj.results; } }); return Amicizie; });
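
    A minimal sketch of the usual way around this, assuming the same Amicizie collection: fetch is asynchronous, so the collection is only populated inside its success callback rather than on the next line. The nested $.ajax inside parse has the same problem, since parse returns obj.results before that inner request has finished.

        // Sketch only: read the collection after fetch completes, not immediately after calling it.
        var amici = new Amicizie();
        amici.fetch({
          success: function (collection, response) {
            console.log(collection.toJSON());   // populated here
          },
          error: function () {
            console.log('fetch failed');
          }
        });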

    Read the article

  • Need to sync two lists with atrribute time, but times aren't equal

    - by virgula24
    I'm going to try to describe my problem the best I can. I have two lists, one with audio frames and the other with color frames (not relevant). Both of them have timestamps; they were captured at the same time but at slightly different instants. So I have something like this:

        index  COLOR  AUDIO
        0       841    846
        1       873    897
        2       905    948
        3       940   1000

    The frames start at high numbers because they were captured and then trimmed to specific parts, but in short, frame 0 is synced, with only 5 ms difference (timestamps are in ms). In every case I have, the audio frame count is less than the color frame count. I need to make them have the same count. The starting frames may be coloraudio, color
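
    As an illustration only (the field names are made up), one common way to pair two such lists is to match each audio timestamp to its nearest color timestamp, which also leaves both lists with the same count:

        // Sketch: for each audio timestamp, find the index of the nearest color timestamp.
        function nearestIndex(times, t) {
          var best = 0;
          for (var i = 1; i < times.length; i++) {
            if (Math.abs(times[i] - t) < Math.abs(times[best] - t)) { best = i; }
          }
          return best;
        }

        var color = [841, 873, 905, 940];    // timestamps from the question, in ms
        var audio = [846, 897, 948, 1000];
        var pairs = audio.map(function (a) {
          return { audio: a, color: color[nearestIndex(color, a)] };
        });
        // pairs -> [{audio: 846, color: 841}, {audio: 897, color: 905}, ...]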

    Read the article

  • Windows 7 taskbar groups to use vertical menus only, not thumbnails?

    - by Justin Grant
    I frequently have 10 or more windows of the same application (e.g. Outlook, Word, IE, etc.) open at one time. Windows 7's new taskbar grouping thumbnail feature shows a preview of open windows (aka thumbnails) when you single-click on the taskbar group for that application. But when I have over 10 windows open (more thumbnails than will fit horizontally on my screen), Windows reverts to a vertical menu. This is disconcerting since I need to train myself to deal with two separate behaviors and can never anticipate what I'm going to see when I click on a taskbar group. Furthermore, I find the thumbnails more difficult to visually scan through (vs. the vertical menu) because only 1-2 words of each window's title are shown. Typically that's not enough text to disambiguate and help me find the right window. I'd like to force Windows 7's taskbar grouping to always show a vertical menu (like in XP) instead of sometimes showing thumbnail previews and sometimes showing a vertical menu. Anyone know how (whether?) this can be done? UPDATE: BTW, I'm running the RTM version (build 7600) of Windows 7. There are apparently other solutions out there which work on earlier builds, but which don't work on the RTM build.

    Read the article

  • Hyper-V Machine drifts time all over, even with NTP

    - by MichaelGG
    Resolved The problem was Hyper-V on that machine. I removed Hyper-V, installed VMware Server, ran the same VM. Time sync issues went away (< 100ms difference after a day). My setup is like this: HYV1 - HyperV machine (non domain) - sync irrelevant AD1 - VM AD server on HYV1, sync'd to time.nist.gov. HyperV time sync off. S1 - Physical machine, sync'd to domain. S2 - Physical machine running HyperV, sync'd to domain. V1 - Linux VM machine on S2, sync'd to AD1. No HyperV integration. AD1 and S1 have fine sync -- stripchart shows less than 100ms difference. S2 drifts like crazy. Here's a bit of the stripchart against AD1: 18:33:22 d:+00.0010138s o:+05.4101899s 18:33:24 d:+00.0010138s o:+05.4319765s 18:33:26 d:+00.0000000s o:+05.4788429s 18:33:28 d:+00.0000000s o:+05.6089942s 18:33:30 d:+00.0010138s o:+05.7240269s 18:33:32 d:+00.0000000s o:+06.0421911s 18:33:34 d:+00.0081104s o:+06.5613708s 18:33:37 d:+00.0000000s o:+06.9096594s 18:33:39 d:+00.0000000s o:+06.8867838s 18:33:41 d:+00.0010127s o:+06.8936401s In 20 seconds, it drifted over a second. If I manually reset it to within 1s, within a few minutes it'll be back drifting about 2 seconds. Overnight it went from ~2s to ~5s. The Linux VM inside S2 has perfect sync with AD1. Here's the config: C:\Users\mgg>w32tm /dumpreg /subkey:Parameters Value Name Value Type Value Data ------------------------------------------------------------ ServiceDll REG_EXPAND_SZ %systemroot%\system32\w32time.dll ServiceMain REG_SZ SvchostEntry_W32Time ServiceDllUnloadOnStop REG_DWORD 1 Type REG_SZ NT5DS NtpServer REG_SZ ad01.mydomain ad02.mydomain C:\Users\mgg>w32tm /dumpreg /subkey:Config Value Name Value Type Value Data ----------------------------------------------------------- FrequencyCorrectRate REG_DWORD 4 PollAdjustFactor REG_DWORD 5 LargePhaseOffset REG_DWORD 50000000 SpikeWatchPeriod REG_DWORD 900 LocalClockDispersion REG_DWORD 9 HoldPeriod REG_DWORD 5 PhaseCorrectRate REG_DWORD 1 UpdateInterval REG_DWORD 30000 EventLogFlags REG_DWORD 2 AnnounceFlags REG_DWORD 5 TimeJumpAuditOffset REG_DWORD 28800 MinPollInterval REG_DWORD 2 MaxPollInterval REG_DWORD 8 MaxNegPhaseCorrection REG_DWORD -1 MaxPosPhaseCorrection REG_DWORD -1 MaxAllowedPhaseOffset REG_DWORD 300 I looked at the event log, and apart from warnings about sync (after it gets way out of sync), there's no other warnings. How can I go about troubleshooting this? It's the only machine that is having this problem. All the other machines (physical and virtual) are doing fine. Edit: To clarify: The VM (AD1) has integration turned off and syncs to time.nist.gov. AD1 is fine. It's the physical machine S1 that can't sync to AD1 and drifts all over. All the other physical servers are able to sync to AD1 just fine. Update So, it appears to be an issue of running the VM. The clock slips slowly with the VM off. Turned on, it immediately starts losing seconds. I swt the VM to only use half the resources, and that seems to have slightly mitigated it, for now. Thanks!

    Read the article

  • Degraded RAID5 and no md superblock on one of remaining drive

    - by ark1214
    This is actually on a QNAP TS-509 NAS. The RAID is basically a Linux RAID. The NAS was configured with RAID 5 with 5 drives (/md0 with /dev/sd[abcde]3). At some point, /dev/sde failed and drive was replaced. While rebuilding (and not completed), the NAS rebooted itself and /dev/sdc dropped out of the array. Now the array can't start because essentially 2 drives have dropped out. I disconnected /dev/sde and hoped that /md0 can resume in degraded mode, but no luck.. Further investigation shows that /dev/sdc3 has no md superblock. The data should be good since the array was unable to assemble after /dev/sdc dropped off. All the searches I done showed how to reassemble the array assuming 1 bad drive. But I think I just need to restore the superblock on /dev/sdc3 and that should bring the array up to a degraded mode which will allow me to backup data and then proceed with rebuilding with adding /dev/sde. Any help would be greatly appreciated. mdstat does not show /dev/md0 # cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] md5 : active raid1 sdd2[2](S) sdc2[3](S) sdb2[1] sda2[0] 530048 blocks [2/2] [UU] md13 : active raid1 sdd4[3] sdc4[2] sdb4[1] sda4[0] 458880 blocks [5/4] [UUUU_] bitmap: 40/57 pages [160KB], 4KB chunk md9 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0] 530048 blocks [5/4] [UUUU_] bitmap: 33/65 pages [132KB], 4KB chunk mdadm show /dev/md0 is still there # mdadm --examine --scan ARRAY /dev/md9 level=raid1 num-devices=5 UUID=271bf0f7:faf1f2c2:967631a4:3c0fa888 ARRAY /dev/md5 level=raid1 num-devices=2 UUID=0d75de26:0759d153:5524b8ea:86a3ee0d spares=2 ARRAY /dev/md0 level=raid5 num-devices=5 UUID=ce3e369b:4ff9ddd2:3639798a:e3889841 ARRAY /dev/md13 level=raid1 num-devices=5 UUID=7384c159:ea48a152:a1cdc8f2:c8d79a9c With /dev/sde removed, here is the mdadm examine output showing sdc3 has no md superblock # mdadm --examine /dev/sda3 /dev/sda3: Magic : a92b4efc Version : 00.90.00 UUID : ce3e369b:4ff9ddd2:3639798a:e3889841 Creation Time : Sat Dec 8 15:01:19 2012 Raid Level : raid5 Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB) Array Size : 5854278400 (5583.08 GiB 5994.78 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Update Time : Sat Dec 8 15:06:17 2012 State : active Active Devices : 4 Working Devices : 4 Failed Devices : 1 Spare Devices : 0 Checksum : d9e9ff0e - correct Events : 0.394 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 0 8 3 0 active sync /dev/sda3 0 0 8 3 0 active sync /dev/sda3 1 1 8 19 1 active sync /dev/sdb3 2 2 8 35 2 active sync /dev/sdc3 3 3 8 51 3 active sync /dev/sdd3 4 4 0 0 4 faulty removed [~] # mdadm --examine /dev/sdb3 /dev/sdb3: Magic : a92b4efc Version : 00.90.00 UUID : ce3e369b:4ff9ddd2:3639798a:e3889841 Creation Time : Sat Dec 8 15:01:19 2012 Raid Level : raid5 Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB) Array Size : 5854278400 (5583.08 GiB 5994.78 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Update Time : Sat Dec 8 15:06:17 2012 State : active Active Devices : 4 Working Devices : 4 Failed Devices : 1 Spare Devices : 0 Checksum : d9e9ff20 - correct Events : 0.394 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 1 8 19 1 active sync /dev/sdb3 0 0 8 3 0 active sync /dev/sda3 1 1 8 19 1 active sync /dev/sdb3 2 2 8 35 2 active sync /dev/sdc3 3 3 8 51 3 active sync /dev/sdd3 4 4 0 0 4 faulty removed [~] # mdadm --examine /dev/sdc3 mdadm: No md superblock detected on /dev/sdc3. 
[~] # mdadm --examine /dev/sdd3 /dev/sdd3: Magic : a92b4efc Version : 00.90.00 UUID : ce3e369b:4ff9ddd2:3639798a:e3889841 Creation Time : Sat Dec 8 15:01:19 2012 Raid Level : raid5 Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB) Array Size : 5854278400 (5583.08 GiB 5994.78 GB) Raid Devices : 5 Total Devices : 4 Preferred Minor : 0 Update Time : Sat Dec 8 15:06:17 2012 State : active Active Devices : 4 Working Devices : 4 Failed Devices : 1 Spare Devices : 0 Checksum : d9e9ff44 - correct Events : 0.394 Layout : left-symmetric Chunk Size : 64K Number Major Minor RaidDevice State this 3 8 51 3 active sync /dev/sdd3 0 0 8 3 0 active sync /dev/sda3 1 1 8 19 1 active sync /dev/sdb3 2 2 8 35 2 active sync /dev/sdc3 3 3 8 51 3 active sync /dev/sdd3 4 4 0 0 4 faulty removed fdisk output shows /dev/sdc3 partition is still there. [~] # fdisk -l Disk /dev/sdx: 128 MB, 128057344 bytes 8 heads, 32 sectors/track, 977 cylinders Units = cylinders of 256 * 512 = 131072 bytes Device Boot Start End Blocks Id System /dev/sdx1 1 8 1008 83 Linux /dev/sdx2 9 440 55296 83 Linux /dev/sdx3 441 872 55296 83 Linux /dev/sdx4 873 977 13440 5 Extended /dev/sdx5 873 913 5232 83 Linux /dev/sdx6 914 977 8176 83 Linux Disk /dev/sda: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 66 530113+ 83 Linux /dev/sda2 67 132 530145 82 Linux swap / Solaris /dev/sda3 133 182338 1463569695 83 Linux /dev/sda4 182339 182400 498015 83 Linux Disk /dev/sda4: 469 MB, 469893120 bytes 2 heads, 4 sectors/track, 114720 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk /dev/sda4 doesn't contain a valid partition table Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 * 1 66 530113+ 83 Linux /dev/sdb2 67 132 530145 82 Linux swap / Solaris /dev/sdb3 133 182338 1463569695 83 Linux /dev/sdb4 182339 182400 498015 83 Linux Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdc1 1 66 530125 83 Linux /dev/sdc2 67 132 530142 83 Linux /dev/sdc3 133 182338 1463569693 83 Linux /dev/sdc4 182339 182400 498012 83 Linux Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdd1 1 66 530125 83 Linux /dev/sdd2 67 132 530142 83 Linux /dev/sdd3 133 243138 1951945693 83 Linux /dev/sdd4 243139 243200 498012 83 Linux Disk /dev/md9: 542 MB, 542769152 bytes 2 heads, 4 sectors/track, 132512 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk /dev/md9 doesn't contain a valid partition table Disk /dev/md5: 542 MB, 542769152 bytes 2 heads, 4 sectors/track, 132512 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk /dev/md5 doesn't contain a valid partition table

    Read the article

  • JEditorPane scrolling to the current caret position

    - by Elliott
    I have a JEditorPane which I use to display an HTML document. The document has hyperlinks embedded in it. When a user clicks on a bookmark, I position the caret at the associated place in the JEditorPane. The JEditorPane is then supposed to scroll to this position. This mostly works, but I noticed that if the document has a lot of break (BR) tags embedded in it, the scrolling does not position the JEditorPane at the right place. It's as if the tags throw the calibration off. Any suggestions on what to do about this?

    Read the article

  • ASP.NET AJAX UpdatePanel problem

    - by Velika
    I'll try to be concise: 1) I have a DropDownList with AutoPostBack set to TRUE. 2) I have an UpdatePanel that contains a Label. 3) When the DropDownList selection is changed, I want to update the Label. Problem: focus is lost on the DropDownList, forcing the user to click on it to set focus back to the control. My "solution": in the DropDownList_SelectionChanged event, set focus back to the drop-down list: dropdownlist1.focus() Problem: while this works great in IE, Firefox and Chrome change the scroll position such that the control which was assigned focus ends up at the bottom of the visible portion of the browser window. This is often a very disorienting side effect. How can this be avoided so it works in FF as it does in IE?
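
    Purely as an illustrative client-side workaround, not the ASP.NET-prescribed fix (the element id shown is a placeholder): remember the window scroll position before calling focus() and restore it afterwards, which keeps Firefox and Chrome from scrolling the refocused control into view.

        // Sketch: refocus the dropdown without letting the browser scroll it into view.
        function focusWithoutScroll(id) {
          var el = document.getElementById(id);
          if (!el) { return; }
          var x = window.pageXOffset, y = window.pageYOffset;
          el.focus();
          window.scrollTo(x, y);   // undo any scroll the focus call caused
        }
        // e.g. focusWithoutScroll('myDropDownListClientId');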

    Read the article
