Search Results

Search found 9952 results on 399 pages for 'big al'.

Page 74/399 | < Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >

  • [help]Website Process Code

    - by user1915555
    I am new here and would really appreciate some help. Please look at this website, go through the process, and download the PDF or text file: http://www.doyourownwill.com/do-your-will-online.html. Can you tell me how that is built? As you go through the process you see links like this: http://www.doyourownwill.com/do-your-will/case-2.html?step=2. How is that done? Before answering, please walk through the full process on the site. I am looking forward to a great reply.

    Read the article

  • querySelectorAll is not finding dynamically added elements with custom attribute

    - by Exception
    I am creating two-way binding between a JS object and the UI in the fiddle below (the code is too big to post here): http://jsfiddle.net/bpH6Z/20/. I am using code like this: var elements = document.querySelectorAll("[" + data_attr + "] *[bd='" + prop_name + "']"); The problem line is marked with a big comment, so it is easy to find. My problem is that I am adding bound elements dynamically with JS, and when I change a value in the UI, the same value is not reflected in the other places. querySelectorAll is failing to find all elements with the same attribute; it finds only the first occurrence. Please look into the issue.
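    One likely explanation, sketched below (the markup, attribute values, and model object are assumptions, since the fiddle code is not reproduced here): querySelectorAll returns a static NodeList, so a list captured before the new elements are inserted never contains them; re-running the query after each insertion, instead of caching the result, picks them up.

        // Minimal sketch: re-query after every insertion so dynamically added
        // elements are found. Attribute names follow the question; the model
        // object and container selector are hypothetical.
        var container = document.querySelector("[data-model]");
        var model = { name: "initial" };

        function refresh(prop_name) {
          // A fresh query sees every element currently in the DOM.
          var elements = document.querySelectorAll("[data-model] [bd='" + prop_name + "']");
          for (var i = 0; i < elements.length; i++) {
            elements[i].value = model[prop_name];
          }
        }

        // After adding a new bound element, run the query again:
        var input = document.createElement("input");
        input.setAttribute("bd", "name");
        container.appendChild(input);
        refresh("name");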

    Read the article

  • Set required attribute of two h:selectManyCheckbox

    - by BRabbit27
    I have two h:selectManyCheckbox components with the required attribute set to true. What I want is for the required attributes of both components to work together: display the error message if and only if both selected-item lists are empty. Right now my problem is that the message is displayed if either one of them is empty. Here's my code:

        <rich:panel>
            <f:facet name="header">
                <h:outputText value="Actualización de catálogos"/>
            </f:facet>
            <h:panelGrid columns="4">
                <h:outputLabel for="actualizarCatalogoPEC" value="Actualizar catálogos PEC"/>
                <h:selectBooleanCheckbox id="actualizarCatalogoPEC" value="#{administrationBean.actualizaTodosPecChecked}">
                    <f:ajax event="click" render="todosCatalogosPEC"/>
                </h:selectBooleanCheckbox>
                <h:outputLabel for="actualizarCatalogoSAGARPA" value="Actualizar catálogos SAGARPA"/>
                <h:selectBooleanCheckbox id="actualizarCatalogoSAGARPA" value="#{administrationBean.actualizaTodosSagarpaChecked}">
                    <f:ajax event="click" render="todosCatalogosSAGARPA"/>
                </h:selectBooleanCheckbox>
                <a4j:outputPanel id="todosCatalogosPEC">
                    <h:selectManyCheckbox id="selectCatalogosPEC" disabled="#{administrationBean.actualizaTodosPecChecked}" required="true" value="#{administrationBean.catalogosPecSeleccionados}" requiredMessage="Seleccione al menos un catálogo" layout="pageDirection">
                        <f:selectItems value="#{administrationBean.catalogosPecOptions}"/>
                    </h:selectManyCheckbox>
                </a4j:outputPanel>
                <h:panelGroup/>
                <a4j:outputPanel id="todosCatalogosSAGARPA">
                    <h:selectManyCheckbox id="selectCatalogosSAGARPA" disabled="#{administrationBean.actualizaTodosSagarpaChecked}" required="true" value="#{administrationBean.catalogosSagarpaSeleccionados}" requiredMessage="Seleccione al menos un catálogo" layout="pageDirection">
                        <f:selectItems value="#{administrationBean.catalogosSagarpaOptions}"/>
                    </h:selectManyCheckbox>
                </a4j:outputPanel>
                <h:panelGroup/>
                <rich:message id="messageCatalogosPEC" for="selectCatalogosPEC"/>
                <h:panelGroup/>
                <rich:message id="messageCatalogosSAGARPA" for="selectCatalogosSAGARPA"/>
                <h:panelGroup/>
                <a4j:commandButton value="Actualizar catálogos" render="messageCatalogosPEC" action="#{administrationBean.doActualizaCatalogos}"/>
            </h:panelGrid>
        </rich:panel>

    Cheers
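    One way to get the either-or behaviour, sketched below only as an assumption (the bean and property names are taken from the EL expressions above; everything else is illustrative): drop required="true" from both components and do the cross-field check in the action method, so a message is queued only when both selections are empty.

        // Hedged sketch of a bean-side cross-field check; only the property
        // names from the page above are real, the rest is illustrative.
        import java.util.List;
        import javax.faces.application.FacesMessage;
        import javax.faces.context.FacesContext;

        public class AdministrationBean {
            private List<String> catalogosPecSeleccionados;
            private List<String> catalogosSagarpaSeleccionados;

            public String doActualizaCatalogos() {
                boolean pecEmpty = catalogosPecSeleccionados == null
                        || catalogosPecSeleccionados.isEmpty();
                boolean sagarpaEmpty = catalogosSagarpaSeleccionados == null
                        || catalogosSagarpaSeleccionados.isEmpty();

                if (pecEmpty && sagarpaEmpty) {
                    // Queue one global message and stay on the page.
                    FacesContext.getCurrentInstance().addMessage(null,
                            new FacesMessage(FacesMessage.SEVERITY_ERROR,
                                    "Seleccione al menos un catálogo", null));
                    return null;
                }
                // ...update the catalogs when at least one list has a selection.
                return null;
            }
            // Getters and setters omitted for brevity.
        }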

    Read the article

  • How to compile using gcc but without using _alloca?

    - by shkim
    For some reason I have to compile a C file with gcc and then link it into a Visual C++ 2008 project. (I used the current latest gcc version: Cygwin gcc 4.3.4 20090804.) But there is one problem: gcc always allocates a big local array through _alloca, and the VC linker can't resolve the symbol __alloca. For example:

        int func() { int big[10240]; .... }

    This code creates the _alloca dependency even though I never call _alloca explicitly. (The array size matters: if I change 10240 to 128, everything is fine.) I tried the gcc options -fno-builtin-alloca and -fno-builtin, but no luck. Is it possible to make gcc not use _alloca (or to adjust the threshold)?
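    One workaround, sketched below as a suggestion rather than a gcc setting (it assumes the array does not need automatic storage): gcc emits the stack-probe call only for large stack frames, so moving the buffer to the heap keeps the frame small and the __alloca reference disappears.

        /* Hedged sketch of a heap-based workaround: gcc inserts the stack
         * probe (_alloca/__chkstk) only when the frame is large, so a
         * malloc'd buffer avoids the unresolved symbol. */
        #include <stdlib.h>
        #include <string.h>

        int func(void)
        {
            int *big = malloc(10240 * sizeof *big);   /* heap instead of stack */
            if (big == NULL)
                return -1;

            memset(big, 0, 10240 * sizeof *big);
            /* ... use big[] exactly as before ... */

            free(big);
            return 0;
        }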

    Read the article

  • Creating Tests at Runtime

    - by James Thigpen
    Are there any .NET testing frameworks which allow dynamic creation of tests without having to deal with a hokey attribute syntax? Something like:

        foreach (var t in tests) { TestFx.Run(t.Name, t.TestDelegate); }

    But with the test reporting you would expect... I could do something like this with RowTests et al, but that seems hokey.

    Read the article

  • What are the differences between Cygwin on Windows and a real UNIX environment?

    - by Tarun
    Hi, I am a C/C++ developer. I have never done C++ programming on Unix; I have only done it on Windows. I want to practice C++ on Unix (because all the big companies ask for C++ with Unix). I have a laptop on which I do not want to install another OS (I have very important software installed on it and I don't have the setup files). So I searched and found Cygwin, a Unix emulator for Windows, and I am thinking of practicing C++ on it. Please help me: how can I practice and learn in an environment close to the Unix environment used in big companies like IBM? What will be the differences between Unix and Cygwin?

    Read the article

  • Improve disk read performance (multiple files) with threading

    - by pablo
    I need to find a way to read a large number of small files (about 300k files) as fast as possible. Reading them sequentially using FileStream and reading each entire file in a single call takes between 170 and 208 seconds (you know: you re-run, the disk cache plays its role, and the time varies). Then I tried PInvoke with CreateFile/ReadFile using FILE_FLAG_SEQUENTIAL_SCAN, but I didn't see any change. I tried several threads (dividing the big set into chunks and having every thread read its own part), and this way I was able to improve speed only a little bit (less than 5% with every new thread, up to 4). Any ideas on how to find the most effective way to do this?
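    For reference, a minimal sketch of the chunked/threaded variant (assuming .NET 4's Parallel.ForEach and a hypothetical source directory); on a single spinning disk the degree of parallelism rarely buys much, because the drive head rather than the CPU is the bottleneck, which matches the small gains described above.

        // Hedged sketch: read files in parallel with a bounded degree of
        // parallelism. The directory path is an assumption.
        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading.Tasks;

        class BulkReader
        {
            static void Main()
            {
                var files = Directory.EnumerateFiles(@"C:\data", "*", SearchOption.AllDirectories);
                var contents = new ConcurrentDictionary<string, byte[]>();

                Parallel.ForEach(files,
                    new ParallelOptions { MaxDegreeOfParallelism = 4 },
                    path => contents[path] = File.ReadAllBytes(path));

                Console.WriteLine("Read {0} files", contents.Count);
            }
        }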

    Read the article

  • How can I implement a tail-recursive list append?

    - by martingw
    A simple append function like this (in F#):

        let rec app s t =
            match s with
            | [] -> t
            | (x::ss) -> x :: (app ss t)

    will crash when s gets big, because the function is not tail-recursive. I noticed that F#'s standard append function does not crash with big lists, so it must be implemented differently. So I wondered: what does a tail-recursive definition of append look like? I came up with something like this:

        let rec comb s t =
            match s with
            | [] -> t
            | (x::ss) -> comb ss (x::t)

        let app2 s t = comb (List.rev s) t

    which works, but looks rather odd. Is there a more elegant definition?
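    One common alternative, sketched below (this is not necessarily how FSharp.Core implements its own append): express the right fold through List.foldBack, whose library implementation handles large lists without overflowing the stack, which keeps the definition to one line without the explicit reverse.

        // Hedged sketch: the same right fold as the naive version, delegated to
        // List.foldBack so large lists do not blow the call stack.
        let app3 s t = List.foldBack (fun x acc -> x :: acc) s t

        // Quick check against a large list:
        let big = List.init 1000000 id
        printfn "%b" (List.length (app3 big [42]) = 1000001)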

    Read the article

  • Understanding NoSQL Data Modeling - blog application

    - by Rushabh RajeshKumar Padalia
    I am creating a blogging application in Node.js with a MongoDB database. I have used relational databases like MySQL before, but this is my first experience with a NoSQL database, so I would like to confirm my MongoDB data models before I move further. I have decided my blogDB will have 3 collections: post_collection stores information about each article, comment_collection stores information about comments on articles, and user_info_collection contains user information.

        PostDB
        {
            "_id"          : ObjectID(...),
            "author"       : "author_name",
            "Date"         : new Date(....),
            "tag"          : ["politics", "war"],
            "post_title"   : "My first Article",
            "post_content" : "Big big article",
            "likes"        : 23,
            "access"       : "public"
        }

        CommentDB
        {
            "_id"        : ObjectID(...),
            "POST"       : "My First Article",
            "comment_by" : "User_name",
            "comment"    : "MY comments"
        }

        UserInfoDB
        {
            "_id"      : ObjectID(...),
            "user"     : "User_name",
            "password" : "My_password"
        }

    I would appreciate your comments.
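    One frequently suggested adjustment, sketched below for the mongo shell (only the collection names follow the model above; everything else is illustrative): reference the parent post by its _id rather than by its title, so comments stay attached if the title is edited, and store a password hash instead of the plain password.

        // Hedged sketch for the mongo shell; field values are placeholders.
        var postId = ObjectId();                 // generate the post's _id up front

        db.post_collection.insert({
            _id: postId,
            author: "author_name",
            date: new Date(),
            tag: ["politics", "war"],
            post_title: "My first Article",
            post_content: "Big big article",
            likes: 23,
            access: "public"
        });

        db.comment_collection.insert({
            post_id: postId,                     // ObjectId reference, not the title
            comment_by: "User_name",
            comment: "My comments"
        });

        db.user_info_collection.insert({
            user: "User_name",
            password_hash: "<bcrypt hash>"       // never store the plain password
        });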

    Read the article

  • How do I serve a large file using Pylons?

    - by Chris R
    I am writing a Pylons-based download gateway. The gateway's clients will address files by ID: /file_gw/download/1. Internally, the file itself is fetched over HTTP from an internal file server: http://internal-srv/path/to/file_1.content. The files may be quite large, so I want to stream the content. I store metadata about the file in a StoredFile model object:

        class StoredFile(Base):
            id = Column(Integer, primary_key=True)
            name = Column(String)
            size = Column(Integer)
            content_type = Column(String)
            url = Column(String)

    Given this, what's the best (i.e., most architecturally sound, performant, et al) way to write my file_gw controller?
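    A rough sketch of one approach (hedged: the controller base class, session and routing names are assumptions, and the exact Pylons response API can differ between versions): proxy the internal URL and yield fixed-size chunks, so the whole file is never held in memory.

        # Hedged sketch of a streaming controller; names marked "assumed" are
        # not from the question.
        import urllib2

        from pylons import response
        from myapp.lib.base import BaseController      # assumed project layout
        from myapp.model import StoredFile, Session    # assumed session/model names


        class FileGwController(BaseController):

            def download(self, id):
                stored = Session.query(StoredFile).get(int(id))
                response.content_type = str(stored.content_type)
                response.headers['Content-Length'] = str(stored.size)

                upstream = urllib2.urlopen(stored.url)

                def stream(chunk_size=64 * 1024):
                    try:
                        while True:
                            chunk = upstream.read(chunk_size)
                            if not chunk:
                                break
                            yield chunk
                    finally:
                        upstream.close()

                return stream()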

    Read the article

  • using a tileset with canvas

    - by Anonymous
    Yeah, so I'm lost from the get-go. Let's say I have a big image with every tile for a 2D top-down RPG game; they're all the same width and everything. What I don't know is: how would I save every individual tile from that image as its own image data for use on the canvas? Basically I want to take a big image with all my tiles, pick square regions throughout it to make images out of the tiles, and store each image as a variable in an array. So, how would I do this?
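    A minimal sketch of one way to do it (the tile size and image path are assumptions): draw each tile region of the big image onto its own small off-screen canvas and keep those canvases in an array; each entry can later be drawn with ctx.drawImage(tiles[i], x, y).

        // Hedged sketch: slice a tileset into an array of off-screen canvases.
        var TILE_SIZE = 32;                 // assumed tile width/height
        var tiles = [];

        var sheet = new Image();
        sheet.src = "tileset.png";          // assumed path to the big image
        sheet.onload = function () {
          var cols = sheet.width / TILE_SIZE;
          var rows = sheet.height / TILE_SIZE;

          for (var row = 0; row < rows; row++) {
            for (var col = 0; col < cols; col++) {
              var tile = document.createElement("canvas");
              tile.width = TILE_SIZE;
              tile.height = TILE_SIZE;
              // Copy one TILE_SIZE x TILE_SIZE region out of the tileset.
              tile.getContext("2d").drawImage(
                sheet,
                col * TILE_SIZE, row * TILE_SIZE, TILE_SIZE, TILE_SIZE,  // source rect
                0, 0, TILE_SIZE, TILE_SIZE);                             // destination
              tiles.push(tile);
            }
          }
        };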

    Read the article

  • Accessing hard-coded data in a C# application.

    - by haymansfield
    I'm trying to avoid hard-coding in a .NET 2.0 (soon to be 3.5) application. I have a large enumeration which I wish to map 1-to-1 to a set of strings. Each enumerated value will also map to one of two values indicating an action. The existing code does this with a big switch statement, but that seems ugly to me. Is there a better way of storing and accessing the data? I've thought about resx files, but when you consider that the designer file contains just as many hard-coded values it seems a little pointless. Is embedding an XML file in the assembly a good idea? Is a big switch statement not as bad as it seems? Is there a better solution?
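    One alternative to the switch, sketched below purely as an illustration (the enumeration, labels, and action kinds are made up): keep the whole mapping in a single static dictionary, so lookup is one line and the data lives in one place; this compiles against .NET 2.0/3.5 with the C# 3 compiler shipped in VS2008.

        // Hedged sketch; every name here is illustrative.
        using System;
        using System.Collections.Generic;

        enum Command { Open, Close, Save }           // the large enumeration
        enum ActionKind { ReadOnly, Mutating }       // the "one of two" action values

        struct CommandInfo
        {
            public readonly string Label;
            public readonly ActionKind Kind;
            public CommandInfo(string label, ActionKind kind) { Label = label; Kind = kind; }
        }

        static class CommandMap
        {
            static readonly Dictionary<Command, CommandInfo> map =
                new Dictionary<Command, CommandInfo>
                {
                    { Command.Open,  new CommandInfo("Open the file",  ActionKind.ReadOnly) },
                    { Command.Close, new CommandInfo("Close the file", ActionKind.ReadOnly) },
                    { Command.Save,  new CommandInfo("Save the file",  ActionKind.Mutating) },
                };

            public static CommandInfo Lookup(Command c) { return map[c]; }
        }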

    Read the article

  • Upsides of a timebox for a customer

    - by Ivo
    So I have a customer with a potentially big project who (of course) does not know exactly what they want. The project could run for more than 4 or 5 months, so that is a big risk. That's why I want to sell a timebox: for me it takes away the risk of spending 10 months instead of 5 for the same price. The problem is that I can't come up with good arguments to convince the customer that a timebox is better for them too. Any suggestions? How do you handle this?

    Read the article

  • Drupal vs FatWire - Any thoughts?

    - by RadiantHex
    Hi folks, a company I am working for is considering using a CMS; two of the suggested CMSs are Drupal and FatWire. FatWire is proprietary and quite expensive, so there seems to be only a small community built around the product. Its functionality seems extensive, even though a few design choices look counter-intuitive and long-winded. Drupal, on the other hand, is open source and has a big community backing the product; there are plenty of books around and usage seems more intuitive. Functionality-wise I am unsure how they compare. The main features the company's team seems to like are team workflow and revision control (present in FatWire, even though the implementation seems quite limited). Hopefully some of you have faced these two products before and have a few suggestions up your sleeve. Help would be much appreciated!

    Read the article

  • jquery tree selection

    - by Qiao
    I have categories in a tree hierarchy, for example country-city: you should first choose a country and then a city. Or a big catalog of products, where you choose several "folders" to get to a specific product. Yahoo Answers has this, and so do some business-catalog sites with big product lists. I have all the categories in PHP and can pass them to JavaScript. How can I implement this on one page? Is there any jQuery plugin for this?
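    A minimal sketch of a dependent (cascading) select with plain jQuery (the element ids and the category data are assumptions; the data object is the kind of structure PHP could emit with json_encode): when the first level changes, the second level is rebuilt from the passed-in object.

        // Hedged sketch: two-level cascading selects driven by a JS object.
        var categories = {                       // e.g. produced by PHP's json_encode
          "France":  ["Paris", "Lyon"],
          "Germany": ["Berlin", "Munich"]
        };

        $(function () {
          var $country = $("#country"), $city = $("#city");   // assumed <select> ids

          $.each(categories, function (name) {
            $country.append($("<option>").val(name).text(name));
          });

          $country.change(function () {
            $city.empty();
            $.each(categories[this.value] || [], function (i, city) {
              $city.append($("<option>").val(city).text(city));
            });
          }).change();
        });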

    Read the article

  • What's the easiest way to create an extensible custom container in Flex?

    - by Chris R
    I want to create an MXML container component that has some of its own chrome -- a standard query display, et al -- and that supports the addition of child components to it. Something a lot like the existing mx:Panel class, which includes a title label, but acts like a plain mx:Box with regards to adding children. What's the easiest way to do this? Edit: To be clear, I want to be able to extend the container using MXML, so the "Multiple visual children" problem is relevant.

    Read the article

  • Can JQuery/JavaScript be used to write a substantial client side application?

    - by Ian
    I have an unusual situation - I have an embedded video streaming device with a complicated UI, and I need to use an embedded web server to reproduce that UI through a web browser. I'm thinking of using JavaScript/JQuery on a C++ backend (I am NOT coding all this myself, I need to hire people for the grunt work). The embedded web server is much less powerful than a PC, so I want to write an application that runs the entire UI in the browser, and only communicates with the server to pass new program settings back and forth, get status updates from the device, and control video playback. In other words, the client gets one big page or a small number of big pages (effectively downloading the application), the application maintains significant local memory storage, and once the pages are first loaded the server never sends anything layout-related. The application has two rows of tabs to navigate ~40 menu pages, drag-and-select controls to pick cells in a grid, sorted lists, lots of standard data entry options, and it should be able to control up to 16 embedded video players at once (preferably VLC). Is this possible in JavaScript/JQuery with a C++ backend?

    Read the article

  • How can I query a table that got split into 2 smaller tables? Union? View?

    - by danfromisrael
    Hello friends, I have a very big table (nearly 2,000,000 records) that got split into 2 smaller tables: one table contains only records from the last week, and the other contains all the rest (which is a lot...). Now I have some stored procedures/functions that used to query the big table before it was split. I still need them to query the union of both tables; however, creating a view that unions the two tables seems to take forever... This is my view:

        CREATE VIEW `united_tables_view` AS
            SELECT * FROM table1
            UNION
            SELECT * FROM table2;

    Then I'd like to switch every stored procedure from selecting from 'oldBigTable' to selecting from 'united_tables_view'... I've tried adding indexes to make it faster, but nothing helps... Any ideas? PS: the view and the union are my own idea, so any other creative idea would be perfect too! Bring it on! Thanks!
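    One thing worth checking, sketched below (table names taken from the view above): UNION deduplicates the combined result, which forces a sort or temporary table over roughly 2 million rows on every query; if the two tables can never contain the same row, UNION ALL skips that work and usually makes the view far cheaper.

        -- Hedged sketch: same view, but without the implicit DISTINCT that UNION adds.
        -- Note that MySQL typically materializes a UNION view (TEMPTABLE algorithm),
        -- so outer WHERE clauses may not be pushed down into table1/table2.
        CREATE VIEW `united_tables_view` AS
            SELECT * FROM table1
            UNION ALL
            SELECT * FROM table2;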

    Read the article

  • How to define large list of strings in Visual Basic

    - by Jenny_Winters
    I'm writing a macro in Visual Basic for PowerPoint 2010. I'd like to initialize a really big list of strings like:

        big_ol_array = Array( _
            "string1", _
            "string2", _
            "string3", _
            "string4", _
            .....
            "string9999" _
        )

    ...but I get the "Too many line continuations" error in the editor. When I try to initialize the big array with no line breaks, the VB editor can't handle such a long line (1000+ characters). Does anyone know a good way to initialize a huge list of strings in VB? Thanks in advance!
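    One common workaround, sketched below (the strings and the delimiter are illustrative): build one long delimited string with ordinary assignments, which needs no line continuations at all, and Split it into the array.

        ' Hedged sketch: no Array(...) call and no line continuations needed.
        Dim s As String
        Dim big_ol_array() As String

        s = "string1;string2;string3"
        s = s & ";string4;string5;string6"
        s = s & ";string7;string8;string9"
        ' ...keep appending; each line is a separate statement, so the
        '    "Too many line continuations" limit never applies.

        big_ol_array = Split(s, ";")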

    Read the article

  • Yet another (13)Permission denied error on Apache2 server

    - by lollercoaster
    I just can't figure it out. I'm running apache2 on an Ubuntu 10.04 i386 server. Whenever I visit my server in a browser at mysub.domain.edu (renamed here; it has a static IP xxx.xxx.xxx.xxx and is connected to the internet, so that's not the problem), I get the following:

        Forbidden
        You don't have permission to access /index.html on this server

    The apache2 error log confirms this:

        [Mon Apr 18 02:38:20 2011] [error] [client zzz.zzz.zzz.zzz] (13)Permission denied: access to / denied

    I'll try to provide all the necessary information below:

    1) Contents of /etc/apache2/httpd.conf

        DirectoryIndex index.html index.php

    2) Contents of /etc/apache2/sites-available/default

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /home/myusername/htdocs
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory "/home/myusername/htdocs/">
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                order allow,deny
                allow from all
                DirectoryIndex index.html index.php
                Satisfy any
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
            ServerName mysub.domain.edu
        </VirtualHost>

    3) Contents of /etc/apache2/sites-enabled/000-default (identical to the sites-available/default file above)

    4) Result of ls -l (running as root via sudo -i):

        root@myserver:/home/myusername# ls -l
        total 4
        drwxr-xr-x 2 www-data root 4096 2011-04-18 03:04 htdocs

    5) ps auxwww | grep -i apache

        root@myserver:/home# ps auxwww | grep -i apache
        root     15121  0.0  0.4   5408 2544 ?      Ss   16:55   0:00 /usr/sbin/apache2 -k start
        www-data 15122  0.0  0.3   5180 1760 ?      S    16:55   0:00 /usr/sbin/apache2 -k start
        www-data 15123  0.0  0.5 227020 2788 ?      Sl   16:55   0:00 /usr/sbin/apache2 -k start
        www-data 15124  0.0  0.5 227020 2864 ?      Sl   16:55   0:00 /usr/sbin/apache2 -k start
        root     29133  0.0  0.1   3320  680 pts/0  R+   16:58   0:00 grep --color=auto -i apache

    6) ls -al /home/myusername/htdocs/

        root@myserver:/# ls -al /home/myusername/htdocs/
        total 20
        drwxr-xr-x 2 www-data   root       4096 2011-04-18 03:04 .
        drw-r--r-- 4 myusername myusername 4096 2011-04-18 02:13 ..
        -rw-r--r-- 1 root       root         69 2011-04-18 02:14 index.html

    I'm not currently using any .htaccess files in my web root (htdocs) folder in my home directory. I don't know what is wrong; I've been trying to fix this for over 12 hours and I've gotten nowhere. If you have any suggestions, I'm all ears...
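    One likely cause, offered below purely as a sketch (based on the ls -al output above): the home directory /home/myusername is listed as drw-r--r--, i.e. without the execute (search) bit, so the www-data user cannot traverse into htdocs even though htdocs itself is world-readable; adding the bit usually clears the (13)Permission denied error.

        # Hedged sketch: give the path leading to the DocumentRoot the execute
        # (search) bit, then reload Apache. Paths are taken from the question.
        sudo chmod 755 /home/myusername
        ls -ld /home/myusername        # should now show drwxr-xr-x
        sudo /etc/init.d/apache2 reload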

    Read the article

  • 2 drives, slow software RAID1 (md)

    - by bart613
    Hello, I've got a server from hetzner.de (EQ4) with 2x SAMSUNG HD753LJ drives (750 GB, 32 MB cache). The OS is CentOS 5 (x86_64). The drives are combined into two RAID1 arrays: /dev/md0, which is 512 MB big and holds only the /boot partition, and /dev/md1, which is over 700 GB big and is one big LVM hosting the other partitions. Now, I've been running some benchmarks, and it seems that even though the drives are exactly the same, their speed differs a bit:

        # hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   25612 MB in  1.99 seconds = 12860.70 MB/sec
         Timing buffered disk reads:  352 MB in  3.01 seconds = 116.80 MB/sec

        # hdparm -tT /dev/sdb
        /dev/sdb:
         Timing cached reads:   25524 MB in  1.99 seconds = 12815.99 MB/sec
         Timing buffered disk reads:  342 MB in  3.01 seconds = 113.64 MB/sec

    Also, when I run e.g. pgbench, which stresses IO quite heavily, I see the following in the iostat output:

        Device: rrqm/s wrqm/s   r/s    w/s rsec/s  wsec/s avgrq-sz avgqu-sz await svctm  %util
        sda       0.00 231.40  0.00 298.00   0.00 9683.20    32.49     0.17  0.58  0.34  10.24
        sda1      0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00   0.00
        sda2      0.00 231.40  0.00 298.00   0.00 9683.20    32.49     0.17  0.58  0.34  10.24
        sdb       0.00 231.40  0.00 301.80   0.00 9740.80    32.28    14.19 51.17  3.10  93.68
        sdb1      0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00   0.00
        sdb2      0.00 231.40  0.00 301.80   0.00 9740.80    32.28    14.19 51.17  3.10  93.68
        md1       0.00   0.00  0.00 529.60   0.00 9692.80    18.30     0.00  0.00  0.00   0.00
        md0       0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00   0.00
        dm-0      0.00   0.00  0.00   0.60   0.00    4.80     8.00     0.00  0.00  0.00   0.00
        dm-1      0.00   0.00  0.00 529.00   0.00 9688.00    18.31    24.51 49.91  1.81  95.92

        Device: rrqm/s wrqm/s   r/s    w/s rsec/s  wsec/s avgrq-sz avgqu-sz await svctm  %util
        sda       0.00 152.40  0.00 330.60   0.00 5176.00    15.66     0.19  0.57  0.19   6.24
        sda1      0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00   0.00
        sda2      0.00 152.40  0.00 330.60   0.00 5176.00    15.66     0.19  0.57  0.19   6.24
        sdb       0.00 152.40  0.00 326.20   0.00 5118.40    15.69    19.96 55.36  3.01  98.16
        sdb1      0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00   0.00
        sdb2      0.00 152.40  0.00 326.20   0.00 5118.40    15.69    19.96 55.36  3.01  98.16
        md1       0.00   0.00  0.00 482.80   0.00 5166.40    10.70     0.00  0.00  0.00   0.00
        md0       0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00   0.00
        dm-0      0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00   0.00
        dm-1      0.00   0.00  0.00 482.80   0.00 5166.40    10.70    30.19 56.92  2.05  99.04

        Device: rrqm/s wrqm/s   r/s    w/s rsec/s  wsec/s avgrq-sz avgqu-sz await svctm  %util
        sda       0.00 181.64  0.00 324.55   0.00 5445.11    16.78     0.15  0.45  0.21   6.87
        sda1      0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00   0.00
        sda2      0.00 181.64  0.00 324.55   0.00 5445.11    16.78     0.15  0.45  0.21   6.87
        sdb       0.00 181.84  0.00 328.54   0.00 5493.01    16.72    18.34 61.57  3.01  99.00
        sdb1      0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00   0.00
        sdb2      0.00 181.84  0.00 328.54   0.00 5493.01    16.72    18.34 61.57  3.01  99.00
        md1       0.00   0.00  0.00 506.39   0.00 5477.05    10.82     0.00  0.00  0.00   0.00
        md0       0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00   0.00
        dm-0      0.00   0.00  0.00   0.00   0.00    0.00     0.00     0.00  0.00  0.00   0.00
        dm-1      0.00   0.00  0.00 506.39   0.00 5477.05    10.82    28.77 62.15  1.96  99.00

    This is getting me completely confused: how come two identically specced drives show such a difference in write speed (see %util)? I haven't really paid attention to these speeds before, so perhaps this is normal -- if someone could confirm that, I would be really grateful. Otherwise, if someone has seen such behavior before or knows what is causing it, I would really appreciate an answer.
    I'll also add that the "smartctl -a" and "hdparm -I" outputs of both drives are exactly the same and do not indicate any hardware problems. The slower drive has already been replaced twice (with new ones). I also asked for the drives to be swapped in their bays; after that, sda was the slower one and sdb the quicker one, so the slowness followed the same physical drive. The SATA cables have also been changed twice already.

    Read the article

  • DRBD not syncing between my nodes when IP is reset

    - by ramdaz
    I am trying to set up DRBD by following the article at http://www.howtoforge.com/setting-up-network-raid1-with-drbd-on-ubuntu-11.10-p2. I am using Ubuntu 10.04 and DRBD 8.3.11. In the first run I had everything working perfectly; when shifting the systems to a production environment, I decided to redo the metadata creation and start from scratch. The IPs had changed entirely in the production environment. Issuing drbdadm create-md r0 on both servers runs successfully, but when I do "drbdadm -- --overwrite-data-of-peer primary all" on the primary, it fails to start the resync. My config file is given below:

        resource r0 {
            protocol C;
            syncer { rate 50M; }
            startup {
                wfc-timeout 15;
                degr-wfc-timeout 60;
            }
            net {
                cram-hmac-alg sha1;
                shared-secret "aklsadkjlhdbskjndsf8738734jkfkjfkjf";
            }
            on primaryds {
                device /dev/drbd0;
                disk /dev/md2;
                address 172.16.7.1:7788;
                meta-disk internal;
            }
            on secondaryds {
                device /dev/drbd0;
                disk /dev/md2;
                address 172.16.7.3:7788;
                meta-disk internal;
            }
        }

    Status on the primary:

        root@primaryds:~# cat /proc/drbd
        version: 8.3.7 (api:88/proto:86-91)
        GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root@primaryds, 2012-05-12 15:08:01
         0: cs:WFBitMapS ro:Primary/Secondary ds:UpToDate/Inconsistent C r----
            ns:0 nr:0 dw:0 dr:200 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:5690352828

    Status on the secondary:

        root@secondaryds:/etc/drbd.d# cat /proc/drbd
        version: 8.3.7 (api:88/proto:86-91)
        GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root@secondaryds, 2012-05-12 15:25:25
         0: cs:WFBitMapT ro:Secondary/Primary ds:Inconsistent/UpToDate C r----
            ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:5690352828

    Log on the primary:

        May 30 13:42:23 primaryds kernel: [ 1584.057076] block drbd0: role( Secondary -> Primary ) disk( Inconsistent -> UpToDate )
        May 30 13:42:23 primaryds kernel: [ 1584.086264] block drbd0: Forced to consider local data as UpToDate!
        May 30 13:42:23 primaryds kernel: [ 1584.086303] block drbd0: Creating new current UUID
        May 30 13:42:26 primaryds kernel: [ 1586.405551] block drbd0: drbd_sync_handshake:
        May 30 13:42:26 primaryds kernel: [ 1586.405564] block drbd0: self E8A075F378173D4B:0000000000000004:0000000000000000:0000000000000000 bits:1422588207 flags:0
        May 30 13:42:26 primaryds kernel: [ 1586.405574] block drbd0: peer 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:1422588207 flags:0
        May 30 13:42:26 primaryds kernel: [ 1586.405582] block drbd0: uuid_compare()=2 by rule 30
        May 30 13:42:26 primaryds kernel: [ 1586.405587] block drbd0: Becoming sync source due to disk states.
        May 30 13:42:26 primaryds kernel: [ 1586.405592] block drbd0: Writing the whole bitmap, full sync required after drbd_sync_handshake.
        May 30 13:42:27 primaryds kernel: [ 1588.171638] block drbd0: 5427 GB (1422588207 bits) marked out-of-sync by on disk bit-map.
        May 30 13:42:27 primaryds kernel: [ 1588.172769] block drbd0: conn( Connected -> WFBitMapS )

    Log on the secondary:

        May 30 13:42:24 secondaryds kernel: [ 1563.304894] block drbd0: peer( Secondary -> Primary ) pdsk( Inconsistent -> UpToDate )
        May 30 13:42:24 secondaryds kernel: [ 1563.339674] block drbd0: drbd_sync_handshake:
        May 30 13:42:24 secondaryds kernel: [ 1563.339685] block drbd0: self 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:1422588207 flags:0
        May 30 13:42:24 secondaryds kernel: [ 1563.339695] block drbd0: peer E8A075F378173D4B:0000000000000004:0000000000000000:0000000000000000 bits:1422588207 flags:0
        May 30 13:42:24 secondaryds kernel: [ 1563.339703] block drbd0: uuid_compare()=-2 by rule 20
        May 30 13:42:24 secondaryds kernel: [ 1563.339709] block drbd0: Becoming sync target due to disk states.
        May 30 13:42:24 secondaryds kernel: [ 1563.339714] block drbd0: Writing the whole bitmap, full sync required after drbd_sync_handshake.
        May 30 13:42:26 secondaryds kernel: [ 1565.652342] block drbd0: 5427 GB (1422588207 bits) marked out-of-sync by on disk bit-map.
        May 30 13:42:26 secondaryds kernel: [ 1565.652965] block drbd0: conn( Connected -> WFBitMapT )

    The servers stop responding once they reach this stage. I have tried redoing it a couple of times, but nothing happens. Why would the resync not take place? I would appreciate some advice or directions.

    Read the article
