Search Results

Search found 3166 results on 127 pages for 'dr tom'.


  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR) A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. Then there's the backup server: a Dell R715 with 2x 16-core AMD 62xx CPUs and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards and an Intel X520-SR dual-port 10GbE NIC. We were also sold CommVault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and CommVault both sent their respective engineers, who set up the library and the software. CommVault was running fine, insofar as the controller was working fine. The Dell server runs Ubuntu 12.04 Server, runs the MediaAgent for CommVault, and mounts our BlueArc NAS over NFS to a few mountpoints, like /home and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), and the rated maximum write speed on the tape devices is 140MB/s per drive, so that's 492GB/hr (or double it for the total throughput). So the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), something like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that 343GB/hr is a theoretical maximum for read performance on the NAS; then we should in theory be able to get that performance out of a) CommVault and b) any other backup agent. Not the case. CommVault only ever gives me 200-250GB/hr throughput, so out of experimentation I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than CommVault, then we'd be able to say "$.$ Refunds Plz $.$". #-#-# Alas, I found a different problem with Bacula. CommVault seems pretty happy to read from one part of the mountpoint with one thread and stream that to a tape device, whilst reading from some other directory with the other thread and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried:

      • Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
      • Setting Prefer Mounted Volumes = no in the Job Definition
      • Setting multiple devices in the Autochanger resource

    The documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern tape solution.
    Configuration Files:

bacula-dir.conf

#
# Default Bacula Director Configuration file
Director {                            # define myself
  Name = backuphost-1-dir
  DIRport = 9101                      # where we listen for UA connections
  QueryFile = "/etc/bacula/scripts/query.sql"
  WorkingDirectory = "/var/lib/bacula"
  PidDirectory = "/var/run/bacula"
  Maximum Concurrent Jobs = 20
  Password = "yourekiddingright"      # Console password
  Messages = Daemon
  DirAddress = 0.0.0.0
  #DirAddress = 127.0.0.1
}

JobDefs {
  Name = "DefaultFileJob"
  Type = Backup
  Level = Incremental
  Client = backuphost-1-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = File
  Priority = 10
  Write Bootstrap = "/var/lib/bacula/%c.bsr"
}

JobDefs {
  Name = "DefaultTapeJob"
  Type = Backup
  Level = Incremental
  Client = backuphost-1-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = "SpectraLogic"
  Messages = Standard
  Pool = AllTapes
  Priority = 10
  Write Bootstrap = "/var/lib/bacula/%c.bsr"
  Prefer Mounted Volumes = no
}

#
# Define the main nightly save backup job
#   By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
Job {
  Name = "BackupClient1"
  JobDefs = "DefaultFileJob"
}

Job {
  Name = "BackupThisVolume"
  JobDefs = "DefaultTapeJob"
  FileSet = "SpecialVolume"
}

#Job {
#  Name = "BackupClient2"
#  Client = backuphost-12-fd
#  JobDefs = "DefaultJob"
#}

# Backup the catalog database (after the nightly save)
Job {
  Name = "BackupCatalog"
  JobDefs = "DefaultFileJob"
  Level = Full
  FileSet = "Catalog"
  Schedule = "WeeklyCycleAfterBackup"
  # This creates an ASCII copy of the catalog
  # Arguments to make_catalog_backup.pl are:
  #   make_catalog_backup.pl <catalog-name>
  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
  # This deletes the copy of the catalog
  RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
  Write Bootstrap = "/var/lib/bacula/%n.bsr"
  Priority = 11                       # run after main backup
}

#
# Standard Restore template, to be changed by Console program
#   Only one such job is needed for all Jobs/Clients/Storage ...
#
Job {
  Name = "RestoreFiles"
  Type = Restore
  Client = backuphost-1-fd
  FileSet = "Full Set"
  Storage = File
  Pool = Default
  Messages = Standard
  Where = /srv/bacula/restore
}

FileSet {
  Name = "SpecialVolume"
  Include {
    Options { signature = MD5 }
    File = /mnt/SpecialVolume
  }
  Exclude {
    File = /var/lib/bacula
    File = /nonexistant/path/to/file/archive/dir
    File = /proc
    File = /tmp
    File = /.journal
    File = /.fsck
  }
}

# List of files to be backed up
FileSet {
  Name = "Full Set"
  Include {
    Options { signature = MD5 }
    File = /usr/sbin
  }
  Exclude {
    File = /var/lib/bacula
    File = /nonexistant/path/to/file/archive/dir
    File = /proc
    File = /tmp
    File = /.journal
    File = /.fsck
  }
}

Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sun at 23:05
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sat at 23:05
}

# This schedule does the catalog. It starts after the WeeklyCycle
Schedule {
  Name = "WeeklyCycleAfterBackup"
  Run = Full sun-sat at 23:10
}

# This is the backup of the catalog
FileSet {
  Name = "Catalog"
  Include {
    Options { signature = MD5 }
    File = "/var/lib/bacula/bacula.sql"
  }
}

# Client (File Services) to backup
Client {
  Name = backuphost-1-fd
  Address = localhost
  FDPort = 9102
  Catalog = MyCatalog
  Password = "surelyyourejoking"      # password for FileDaemon
  File Retention = 30 days            # 30 days
  Job Retention = 6 months            # six months
  AutoPrune = yes                     # Prune expired Jobs/Files
}

#
# Second Client (File Services) to backup
#   You should change Name, Address, and Password before using
#
#Client {
#  Name = backuphost-12-fd
#  Address = localhost2
#  FDPort = 9102
#  Catalog = MyCatalog
#  Password = "i'mnotjokinganddontcallmeshirley"   # password for FileDaemon 2
#  File Retention = 30 days            # 30 days
#  Job Retention = 6 months            # six months
#  AutoPrune = yes                     # Prune expired Jobs/Files
#}

# Definition of file storage device
Storage {
  Name = File
  # Do not use "localhost" here
  Address = localhost                 # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "lalalalala"
  Device = FileStorage
  Media Type = File
}

Storage {
  Name = "SpectraLogic"
  Address = localhost
  SDPort = 9103
  Password = "linkedinmakethebestpasswords"
  Device = Drive-1
  Device = Drive-2
  Media Type = LTO5
  Autochanger = yes
}

# Generic catalog service
Catalog {
  Name = MyCatalog
  # Uncomment the following line if you want the dbi driver
  # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
  dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63"
}

# Reasonable message delivery -- send most everything to email address
#   and to the console
Messages {
  Name = Standard
  mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
  operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
  mail = root@localhost = all, !skipped
  operator = root@localhost = mount
  console = all, !skipped, !saved
  #
  # WARNING! the following will create a file that you must cycle from
  #   time to time as it will grow indefinitely. However, it will
  #   also keep all your messages if they scroll off the console.
  #
  append = "/var/lib/bacula/log" = all, !skipped
  catalog = all
}

#
# Message delivery for daemon messages (no job).
Messages {
  Name = Daemon
  mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
  mail = root@localhost = all, !skipped
  console = all, !skipped, !saved
  append = "/var/lib/bacula/log" = all, !skipped
}

# Default pool definition
Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes                       # Bacula can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 365 days         # one year
}

# File Pool definition
Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes                       # Bacula can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 365 days         # one year
  Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
  Maximum Volumes = 100               # Limit number of Volumes in Pool
}

Pool {
  Name = AllTapes
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 31 days          # one month
}

# Scratch pool definition
Pool {
  Name = Scratch
  Pool Type = Backup
}

#
# Restricted console used by tray-monitor to get the status of the director
#
Console {
  Name = backuphost-1-mon
  Password = "LastFMalsostorePasswordsLikeThis"
  CommandACL = status, .status
}

bacula-sd.conf

#
# Default Bacula Storage Daemon Configuration file
#
Storage {                             # definition of myself
  Name = backuphost-1-sd
  SDPort = 9103                       # Director's port
  WorkingDirectory = "/var/lib/bacula"
  Pid Directory = "/var/run/bacula"
  Maximum Concurrent Jobs = 20
  SDAddress = 0.0.0.0
  # SDAddress = 127.0.0.1
}

#
# List Directors who are permitted to contact Storage daemon
#
Director {
  Name = backuphost-1-dir
  Password = "passwordslinplaintext"
}

#
# Restricted Director, used by tray-monitor to get the
#   status of the storage daemon
#
Director {
  Name = backuphost-1-mon
  Password = "totalinsecurityabound"
  Monitor = yes
}

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /srv/bacula/archive
  LabelMedia = yes;                   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

Autochanger {
  Name = SpectraLogic
  Device = Drive-1
  Device = Drive-2
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/sg4
}

Device {
  Name = Drive-1
  Drive Index = 0
  Archive Device = /dev/nst0
  Changer Device = /dev/sg4
  Media Type = LTO5
  AutoChanger = yes
  RemovableMedia = yes;
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RandomAccess = no;
  LabelMedia = yes
}

Device {
  Name = Drive-2
  Drive Index = 1
  Archive Device = /dev/nst1
  Changer Device = /dev/sg4
  Media Type = LTO5
  AutoChanger = yes
  RemovableMedia = yes;
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RandomAccess = no;
  LabelMedia = yes
}

#
# Send all messages to the Director,
#   mount messages also are sent to the email address
#
Messages {
  Name = Standard
  director = backuphost-1-dir = all
}

bacula-fd.conf

#
# Default Bacula File Daemon Configuration file
#

#
# List Directors who are permitted to contact this File daemon
#
Director {
  Name = backuphost-1-dir
  Password = "hahahahahaha"
}

#
# Restricted Director, used by tray-monitor to get the
#   status of the file daemon
#
Director {
  Name = backuphost-1-mon
  Password = "hohohohohho"
  Monitor = yes
}

#
# "Global" File daemon configuration specifications
#
FileDaemon {                          # this is me
  Name = backuphost-1-fd
  FDport = 9102                       # where we listen for the director
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run/bacula
  Maximum Concurrent Jobs = 20
  #FDAddress = 127.0.0.1
  FDAddress = 0.0.0.0
}

# Send all messages except skipped files back to Director
Messages {
  Name = Standard
  director = backuphost-1-dir = all, !skipped, !restored
}

    Read the article

  • Can you configure multiple KMS hosts in a primary / secondary relationship?

    - by Mark Hall
    We have two datacenters in our environment: primary and DR. I need to deploy a KMS service, and to be proactive, I would like to have a host in both datacenters. From what I have read, you can have up to 6 hosts without calling Microsoft, and it appears that what will happen is that an SRV record for each host will be placed in DNS. The client will query for those SRV records, randomly choose a host for the initial activation, and use that same server for all renewals. The server can be changed manually through a script, and will change automatically if the initial server is unavailable when activating or renewing. My question is: has anyone found a way to designate one server as the primary KMS host and designate the other as failover only? The reason I ask is that it is preferred that the clients communicate with the primary datacenter during normal operations and only talk to the DR datacenter when needed, because the bandwidth between the offices and the DR datacenter is limited compared to the primary. I am sure that this has been done before, but I cannot find it in MSFT's documentation. Thanks, Mark

    Read the article

  • ESX 3.5 refuses to update

    - by Speeddymon
    I have a set of ESX 3.5 servers in 2 different datacenters. One is DR, one is production. They are on the same VLAN, so I can access any of them on the private network from my vCenter server. Last month, as a learning experience (I hadn't dealt with ESX much before), I updated the DR server. Other than finding out that a couple of bundles had to be installed manually in order to get the rest to install from vCenter, it went off without a hitch. Now I'm trying to do the same for our production servers and it is not working. I've googled around for the error I get during the scan and investigated loads of different solutions (editing the integrity file, checking DNS, etc.) -- I did already install the 2 bundles that had to be installed manually -- but the scan from vCenter is just not working. Side note: I did just scan the DR server again and that scan works fine, so it shouldn't be a problem with vCenter that has cropped up recently -- it has to be something else. The error I get is: Patch metadata for (servername) missing. Please download updates metadata first. Failed to scan (servername) for updates. I'm all out of ideas on how to make this work, so any help would be hugely appreciated.

    Read the article

  • Does subTable support reRender?

    - by Tom
    Here is a minimal rich:dataTable with an a4j:commandLink inside. When clicked, it sends an AJAX request to my bean and reRenders the dataTable. <rich:dataTable id="dataTable" value="#{carManager.all}" var="item"> <rich:column> <f:facet name="header">name</f:facet> <h:outputText value="#{item.name}" /> </rich:column> <rich:column> <f:facet name="header">action</f:facet> <a4j:commandLink reRender="dataTable" value="Delete" action="#{carForm.delete}"> <f:setPropertyActionListener value="#{item.id}" target="#{carForm.id}" /> <f:param name="from" value="list" /> </a4j:commandLink> </rich:column> </rich:dataTable> The example above works fine so far. But when I add a rich:subTable to the table, reRendering fails... <rich:dataTable id="dataTable" value="#{garageManager.all}" var="garage"> <f:facet name="header"> <rich:columnGroup> <rich:column>name</rich:column> <rich:column>action</rich:column> </rich:columnGroup> </f:facet> <rich:column colspan="2"> <h:outputText value="#{garage.name}" /> </rich:column> <rich:subTable value="#{garage.cars}" var="car"> <rich:column><h:outputText value="#{car.name}" /></rich:column> <rich:column> <a4j:commandLink reRender="dataTable" value="Delete" action="#{carForm.delete}"> <f:setPropertyActionListener value="#{item.id}" target="#{carForm.id}" /> <f:param name="from" value="list" /> </a4j:commandLink> </rich:column> </rich:subTable> </rich:dataTable> Now the rich:dataTable is not rerendered, but the item does get deleted, since the item no longer shows up after a complete page refresh. Does subTable support reRender the way I'd like to use it here? Thanks, Tom

    Read the article

  • Does a nested subTable break reRender?

    - by Tom
    Here is a minimal rich:dataTable with an a4j:commandLink inside. When clicked, it sends an AJAX request to my bean and reRenders the dataTable. <rich:dataTable id="dataTable" value="#{carManager.all}" var="item"> <rich:column> <f:facet name="header">name</f:facet> <h:outputText value="#{item.name}" /> </rich:column> <rich:column> <f:facet name="header">action</f:facet> <a4j:commandLink reRender="dataTable" value="Delete" action="#{carForm.delete}"> <f:setPropertyActionListener value="#{item.id}" target="#{carForm.id}" /> <f:param name="from" value="list" /> </a4j:commandLink> </rich:column> </rich:dataTable> The example above works fine so far. But when I add a rich:subTable to the table, reRendering fails... <rich:dataTable id="dataTable" value="#{garageManager.all}" var="garage"> <f:facet name="header"> <rich:columnGroup> <rich:column>name</rich:column> <rich:column>action</rich:column> </rich:columnGroup> </f:facet> <rich:column colspan="2"> <h:outputText value="#{garage.name}" /> </rich:column> <rich:subTable value="#{garage.cars}" var="car"> <rich:column><h:outputText value="#{car.name}" /></rich:column> <rich:column> <a4j:commandLink reRender="dataTable" value="Delete" action="#{carForm.delete}"> <f:setPropertyActionListener value="#{item.id}" target="#{carForm.id}" /> <f:param name="from" value="list" /> </a4j:commandLink> </rich:column> </rich:subTable> </rich:dataTable> Now the rich:dataTable is not rerendered, but the item does get deleted, since the item no longer shows up after a complete page refresh. Why does subTable break reRender the way I'd like to use it here? Thanks, Tom

    Read the article

  • understanding logic of dijit css and styles

    - by Tom
    Hi, I am trying to use dijit.InlineEditBox. I have put the following code in my HTML, using the example in the dojo docs: <script type="text/javascript"> dojo.require("dijit.InlineEditBox"); dojo.require("dojo.parser"); dojo.require("dijit.form.TextBox"); function editableHeaderOnChange(id, arg){ alert("details changed with id " + id + " and arguments "+arg); } </script> ... <span id="myText" dojoType="dijit.InlineEditBox" onChange="editableHeaderOnChange(this.id,arguments[0])" autoSave="true" title="My Text">click to edit me</span> I am using the tundra theme. It works, however it doesn't look so good. The widget has its own style, which doesn't fit my CSS. I used Firebug to locate the source of the problem. The widget creates many nested div/span elements, each with its own style (element style in Firebug): <span id="dijit__InlineEditor_0" class="dijitReset dijitInline" style="margin: 0px; position: absolute; visibility: hidden; display: block; opacity: 0;" ...> <input type="text" autocomplete="off" class="dijit dijitReset dijitLeft dijitTextBox" id="dijit_form_TextBox_0" style="line-height: 20px; font-weight: 400; font-family: Trebuchet MS,Helvetica,Arial,Verdana; font-size: 14.5167px; font-style: normal; width: 100%;"> ...> </span></span> (showing only the relevant parts...) To get the visual result I want, which will not break to a new line, I need to change the width of dijit_form_TextBox_0 to 50% and the positioning of dijit__InlineEditor_0 to display: inline; or to change the positioning of everything (most of my layout is floated, so position: absolute doesn't fit). I cannot address those span elements in my CSS to change the properties, because the element.style has priority, of course. I don't understand the logic in this system... Why is dijit generating the style directly inside the element? How can I change these properties? Thanks Tom

    Read the article

  • Why does a subTable break a4j:commandLink's reRender?

    - by Tom
    Here is a minimal rich:dataTable example with an a4j:commandLink inside. When clicked, it sends an AJAX request to my bean and reRenders the dataTable. <rich:dataTable id="dataTable" value="#{carManager.all}" var="item"> <rich:column> <f:facet name="header">name</f:facet> <h:outputText value="#{item.name}" /> </rich:column> <rich:column> <f:facet name="header">action</f:facet> <a4j:commandLink reRender="dataTable" value="Delete" action="#{carForm.delete}"> <f:setPropertyActionListener value="#{item.id}" target="#{carForm.id}" /> <f:param name="from" value="list" /> </a4j:commandLink> </rich:column> </rich:dataTable> The example above works fine so far. But when I add a rich:subTable (grouping the cars by garage, for example) to the table, reRendering fails... <rich:dataTable id="dataTable" value="#{garageManager.all}" var="garage"> <f:facet name="header"> <rich:columnGroup> <rich:column>name</rich:column> <rich:column>action</rich:column> </rich:columnGroup> </f:facet> <rich:column colspan="2"> <h:outputText value="#{garage.name}" /> </rich:column> <rich:subTable value="#{garage.cars}" var="car"> <rich:column><h:outputText value="#{car.name}" /></rich:column> <rich:column> <a4j:commandLink reRender="dataTable" value="Delete" action="#{carForm.delete}"> <f:setPropertyActionListener value="#{item.id}" target="#{carForm.id}" /> <f:param name="from" value="list" /> </a4j:commandLink> </rich:column> </rich:subTable> </rich:dataTable> Now the rich:dataTable is not rerendered, but the item does get deleted, since the item no longer shows up after a manual page refresh. Why does subTable break a4j:commandLink's reRender here? Thanks, Tom

    Read the article

  • architecture python question

    - by tom smith
    Hi. I'm creating a distributed crawling Python app. It consists of a master server and associated client apps that will run on client servers. The purpose of the client app is to run across a targeted site, to extract specific data. The clients need to go "deep" within the site, behind multiple levels of forms, so each client is specifically geared towards a given site. Each client app looks something like:

        main:
          parse initial url
          call function level1 (data1)

        function level1 (data):
          parse the url, for data1
          use the required xpath to get the dom elements
          call the next function, call level2 (data)

        function level2 (data2):
          parse the url, for data2
          use the required xpath to get the dom elements
          call the next function, call level3

        function level3 (data3):
          parse the url, for data3
          use the required xpath to get the dom elements
          call the next function, call level4

        function level4 (data):
          parse the url, for data4
          use the required xpath to get the dom elements

        at the final function:
          -- all the data is output, and eventually returned to the server
          -- at this point the data has elements from each function

    My question: given that the number of calls made to the child function by the current function varies, I'm trying to figure out the best approach. Each function essentially fetches a page of content and then parses the page using a number of different XPath expressions, combined with different regex expressions depending on the site/page. If I run a client on a single box, as a sequential process, it'll take a while, but the load on the box is rather small. I've thought of attempting to implement the child functions as threads from the current function, but that could be a nightmare, as well as quickly bring the "box" to its knees! I've thought of breaking the app up in a manner that would allow the master to essentially pass packets to the client boxes, in a way that allows each client/function to be run directly from the master. This process requires a bit of a rewrite, but it has a number of advantages: a bunch of redundancy, and speed. It would detect if a section of the process was crashing and restart from that point. But I'm not sure if it would be any faster... I'm writing the parsing scripts in Python, so any thoughts/comments would be appreciated. I can get into a great deal more detail, but didn't want to bore anyone! Thanks! Tom

    Read the article

  • Middleware with generic communication media layer

    - by Tom
    Greetings all, I'm trying to implement middleware (a driver) for an embedded device with a generic communication media layer. I'm not sure what the best way to do it is, so I'm seeking advice from more experienced Stack Overflow users :). Basically we've got devices around the country communicating with our servers (or a PDA/laptop used in the field). The usual form of communication is over TCP/IP, but it could also be over USB, an RF dongle, IR, etc. The plan is to have an object corresponding to each of these devices, handling the proprietary protocol on one side and requests/responses from other internal systems on the other. The thing is how to create something generic between the media and the handling objects (a rough sketch of what I mean follows below). I had a play around with a TCP dispatcher using boost.asio, but trying to create something generic seems like a nightmare :). Has anybody tried to do something like that? What is the best way to do it? Example: A device connects to our Linux server. A new middleware instance is created (on the server) which announces itself to one of the running services (details are not important). The service is responsible for making sure that the device's time is synchronized. So it asks the middleware what the device's time is, the driver translates that into the device language (protocol) and sends the message, the device responds, and the driver again translates the reply for the service. This might seem like overkill for such a simple request, but imagine there are more complex requests which the driver must translate; also there are several versions of the device which use different protocols, etc., but they would all use the same time sync service. The goal is to abstract the devices through the drivers so that the same service can communicate with all of them. Another example: we find out that remote communications with the device are down. So we send somebody out with a PDA; he connects to the device using a USB cable and starts up an application which has the same functionality as the time sync service. Again a middleware instance is created (on the PDA) to translate communication between the application and the device, this time only using USB/serial as the media, not TCP/IP as in the previous example. I hope it makes more sense now :) Cheers, Tom
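
    To make the "generic media layer" idea a bit more concrete, here is roughly the shape I have in mind, sketched in C# purely for brevity (the real code is C++ with boost.asio); all the names are illustrative assumptions, not an existing API:

        // Sketch only: the driver talks to an abstract transport, so TCP, USB, RF or IR
        // implementations can be swapped in without touching the protocol translation.
        public interface ITransport
        {
            void Send(byte[] frame);
            byte[] Receive();                      // blocking read of one raw frame
        }

        public sealed class TcpTransport : ITransport
        {
            public void Send(byte[] frame) { /* write to the socket */ }
            public byte[] Receive() { /* read from the socket */ return new byte[0]; }
        }

        public sealed class DeviceDriver
        {
            private readonly ITransport transport;
            public DeviceDriver(ITransport transport) { this.transport = transport; }

            public System.DateTime GetDeviceTime()
            {
                transport.Send(new byte[] { 0x01 });   // hypothetical "get time" opcode
                byte[] reply = transport.Receive();
                return System.DateTime.Now;            // placeholder for the real decode of 'reply'
            }
        }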

    Read the article

  • How to assign WPF resources to other resource tags

    - by Tom
    This is quite obscure; I may just be missing something extremely simple. Scenario 1: Let's say I create a gradient brush like this in my <Window.Resources> section: <LinearGradientBrush x:Key="GridRowSelectedBackBrushGradient" StartPoint="0,0" EndPoint="0,1"> <GradientStop Color="#404040" Offset="0.0" /> <GradientStop Color="#404040" Offset="0.5" /> <GradientStop Color="#000000" Offset="0.6" /> <GradientStop Color="#000000" Offset="1.0" /> </LinearGradientBrush> Then much later on, I want to override the HighlightBrushKey for a DataGrid. I have basically done it like this (horrible): <LinearGradientBrush x:Key="{x:Static SystemColors.HighlightBrushKey}" GradientStops="{Binding Source={StaticResource GridRowSelectedBackBrushGradient}, Path=GradientStops}" StartPoint="{Binding Source={StaticResource GridRowSelectedBackBrushGradient}, Path=StartPoint}" EndPoint="{Binding Source={StaticResource GridRowSelectedBackBrushGradient}, Path=EndPoint}" /> This is obviously not the most slick way of referencing a resource. I also came up against the following problem, which is almost identical. Scenario 2: Say I created two colors in my <Window.Resources> markup, like so: <SolidColorBrush x:Key="DataGridRowBackgroundBrush" Color="#EAF2FB" /> <SolidColorBrush x:Key="DataGridRowBackgroundAltBrush" Color="#FFFFFF" /> Then later on, I want to supply them in an array which feeds the ConverterParameter on a Binding, so I can supply the custom Converter with my static resource instances: <Setter Property="Background"> <Setter.Value> <Binding RelativeSource="{RelativeSource Mode=Self}" Converter="{StaticResource BackgroundBrushConverter}"> <Binding.ConverterParameter> <x:Array Type="{x:Type Brush}"> <SolidColorBrush Color="{Binding Source={StaticResource DataGridRowBackgroundBrush}, Path=Color}" /> <SolidColorBrush Color="{Binding Source={StaticResource DataGridRowBackgroundAltBrush}, Path=Color}" /> </x:Array> </Binding.ConverterParameter> </Binding> </Setter.Value> </Setter> What I've done is attempt to re-reference an existing resource, but in my efforts I've actually recreated the resource and bound the properties so they match. Again, this is not ideal. Because I've now hit this problem at least twice, is there a better way? Thanks, Tom

    Read the article

  • SQL: How to select rows from a table while ignoring the duplicate field values?

    - by Maxxon
    How to select rows from a table while ignoring the duplicate field values? Here is an example:

        id   user_id   message
        1    Adam      "Adam is here."
        2    Peter     "Hi there this is Peter."
        3    Peter     "I am getting sick."
        4    Josh      "Oh, snap. I'm on a boat!"
        5    Tom       "This show is great."
        6    Laura     "Textmate rocks."

    What I want to achieve is to select the recently active users from my db. Let's say I want to select the 5 most recently active users. The problem is that the following script selects Peter twice.

        mysql_query("SELECT * FROM messages ORDER BY id DESC LIMIT 5 ");

    What I want is to skip the row when it gets to Peter again, and select the next result, in our case Adam. So I don't want to show my visitors that the recently active users were Laura, Tom, Josh, Peter, and Peter again. That does not make any sense; instead I want to show them this way: Laura, Tom, Josh, Peter, (skipping Peter) and Adam. Is there an SQL command I can use for this problem?

    Read the article

  • asp.net gridview

    - by arjun
    I have a GridView with bound fields and a template field for a checkbox. I wrote code to delete records for the checked checkboxes. My problem is here:

        HtmlInputCheckBox chk;
        foreach (GridViewRow dr in dgvdetails.Rows)
        {
            chk = (HtmlInputCheckBox)dr.FindControl("ch");
            chk.Checked = true;
            if (chk.Checked)   // here the checkbox is not checked even if I check it
            {
                pl.id = int.Parse(chk.Value);
                bl.deletedgvdetails(pl);
            }
        }
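
    For comparison, this is the shape the loop usually takes when it only reads the posted state (without the chk.Checked = true; assignment before the if). It reuses the same dgvdetails, pl and bl from above, and assumes the grid is not re-bound earlier in the postback (otherwise the posted checkbox state would be lost):

        // Sketch only -- same variables as the snippet above.
        foreach (GridViewRow dr in dgvdetails.Rows)
        {
            var chk = (HtmlInputCheckBox)dr.FindControl("ch");
            if (chk != null && chk.Checked)
            {
                pl.id = int.Parse(chk.Value);
                bl.deletedgvdetails(pl);
            }
        }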

    Read the article

  • Global Thermonuclear War [closed]

    - by Vivin Paliath
    Hey there, I'm Dr. Falken and I'm trying to make a computer program on my computer (WOPR) that simulates Global Thermonuclear War. So far I've simulated Checkers and Tic-Tac-Toe, but I've never tried to do anything on this scale. Any pointers on how I should start? Sincerely, Dr. Falken

    Read the article

  • Using SDL Tridion 2011 Core Service to create Components programmatically

    - by user1428019
    I have seen some of the questions/answers related to this topic here, but I am still not getting the suggestion I want, so I am posting my question again; I would be thankful for your valuable time and answers. I would like to create Components, Pages, Structure Groups, Publications and Folders programmatically in SDL Tridion Content Manager; later on, I would like to add the programmatically created Components to a Page, attach a Component Template and Page Template for that Page, and finally publish the Page programmatically. I have done all of these activities in SDL Tridion 2009 using the TOM API (interop DLLs), and I tried them in SDL Tridion 2011 using the TOM.Net API. That was not working, and later on I came to know that the TOM.Net API does not support these kinds of tasks; it is specifically for Templates and the Event System. Finally I came to know that I have to go for the Core Service to do this kind of work. My questions: 1) When I create a console application to create a Component programmatically using the Core Service, which DLLs do I have to add as references? 2) Earlier, I created the exe and ran it on the TCM server, and the exe created all the items; can I use the same approach with the Core Service too? Will it work? 3) Is the BC (Business Connector) still available, or has the Core Service replaced it? 4) Can anyone send a code snippet to create a Component/Page (a complete class file would be helpful to understand better)? Thanks Jai

    Read the article

  • ASP.Net check value with DBNULL

    - by c11ada
    Hey all, I have the following code:

        foreach (DataRowView dr in Data)
        {
            if (dr == System.DBNull.Value)
            {
                nedID = 1;
            }
        }

    but I get the following error: Operator '==' cannot be applied to operands of type 'System.Data.DataRowView' and 'System.DBNull'. Please can someone advise me on how I can check whether the value is null or DBNull.
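
    One pattern I've seen suggested (I'm not certain it is the right one here) is to compare an individual column value against DBNull.Value, or use DataRow.IsNull, rather than comparing the DataRowView object itself — a rough sketch, where "SomeColumn" is a made-up column name:

        using System;
        using System.Data;

        // Sketch only: a DataRowView is never DBNull itself; test a column's value instead.
        static bool IsColumnNull(DataRowView drv)
        {
            return drv.Row.IsNull("SomeColumn") || drv["SomeColumn"] == DBNull.Value;
        }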

    Read the article

  • ASP.Net Error - Unable to cast object of type 'System.String' to type 'System.Data.DataTable'.

    - by xtrabits
    I get the error below: Unable to cast object of type 'System.String' to type 'System.Data.DataTable'. This is the code I'm using:

        Dim str As String = String.Empty
        If (Session("Brief") IsNot Nothing) Then
            Dim dt As DataTable = Session("Brief")
            If (dt.Rows.Count > 0) Then
                For Each dr As DataRow In dt.Rows
                    If (str.Length > 0) Then str += ","
                    str += dr("talentID").ToString()
                Next
            End If
        End If
        Return str

    Thanks
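
    For reference, the pattern I've seen suggested for this kind of error is to check what is actually stored in the session before casting it (sketched below in C#; the VB.NET equivalent would use TryCast). It assumes nothing beyond the "Brief" key used above:

        using System.Data;

        // Sketch only: the error says Session("Brief") currently holds a String, not a DataTable,
        // so guard the cast instead of assigning it straight into a DataTable variable.
        static DataTable AsDataTable(object sessionValue)
        {
            return sessionValue as DataTable;   // null when the stored value is not a DataTable
        }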

    Read the article

  • How can I solve an out of memory exception in a generic list?

    - by Phsika
    How can I solve an out of memory exception in a generic list when adding new values?

        foreach (DataColumn dc in dTable.Columns)
            foreach (DataRow dr in dTable.Rows)
                myScriptCellsCount.MyCellsCharactersCount.Add(dr[dc].ToString().Length);

    My base class:

        public class MyExcelSheetsCells
        {
            public List<int> MyCellsCharactersCount { get; set; }
            public MyExcelSheetsCells()
            {
                MyCellsCharactersCount = new List<int>();
            }
        }
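
    A minimal sketch of the kind of change that is sometimes suggested for this (assuming all the counts really do need to stay in memory): size the list up front so it does not keep re-allocating and copying its internal array while the table is walked:

        using System.Collections.Generic;
        using System.Data;

        // Sketch only: pre-allocate the list's capacity once, based on the table size,
        // instead of letting it grow (and re-copy its internal array) on every Add().
        static List<int> CollectCellLengths(DataTable dTable)
        {
            var counts = new List<int>(dTable.Rows.Count * dTable.Columns.Count);
            foreach (DataColumn dc in dTable.Columns)
                foreach (DataRow dr in dTable.Rows)
                    counts.Add(dr[dc].ToString().Length);
            return counts;
        }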

    Read the article

  • Adding a GridViewRowCollection to the asp.net GridView

    - by CoffeeCode
    I have a record of a GridViewRowCollection, and I'm having issues with adding the rows to the grid.

        GridViewRowCollection dr = new GridViewRowCollection(list);
        StatisticsGrid.DataSource = dr;

    doesn't work. StatisticsGrid.Rows does have an add method, which is strange. How can I add a GridViewRowCollection without creating a DataTable and binding it to the DataSource? Thanks in advance
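
    For context, the workaround I keep running into is to rebuild a bindable source (a DataTable or a List) from the rows and bind that, rather than assigning the row collection itself — a rough sketch, where the single "Value" column and the use of Cells[0] are purely illustrative assumptions:

        using System.Data;
        using System.Web.UI.WebControls;

        // Sketch only: copy the text of the first cell of each row into a DataTable
        // and bind that, since assigning the GridViewRowCollection directly did not work above.
        static void RebindFromRows(GridView source, GridView statisticsGrid)
        {
            var table = new DataTable();
            table.Columns.Add("Value", typeof(string));
            foreach (GridViewRow row in source.Rows)
            {
                table.Rows.Add(row.Cells[0].Text);
            }
            statisticsGrid.DataSource = table;
            statisticsGrid.DataBind();
        }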

    Read the article

  • How can I solve an out of memory exception error in a generic list?

    - by Phsika
    How can I solve an out of memory exception in a generic list when adding new values?

        foreach (DataColumn dc in dTable.Columns)
            foreach (DataRow dr in dTable.Rows)
                myScriptCellsCount.MyCellsCharactersCount.Add(dr[dc].ToString().Length);

    My base class:

        public class MyExcelSheetsCells
        {
            public List<int> MyCellsCharactersCount { get; set; }
            public MyExcelSheetsCells()
            {
                MyCellsCharactersCount = new List<int>();
            }
        }

    Read the article

  • Merge Multiple Excel Workbooks

    - by IRHM
    I wonder whether someone may be able to help me please. I'm trying to use the code below to allow the user to select multiple Excel Workbooks, amalgamating the data into one 'Summary' sheet. Sub Merge() Dim DestWB As Workbook, WB As Workbook, WS As Worksheet, SourceSheet As String Set DestWB = ActiveWorkbook SourceSheet = "Input" startrow = 7 FileNames = Application.GetOpenFilename( _ filefilter:="Excel Files (*.xls*),*.xls*", _ Title:="Select the workbooks to merge.", MultiSelect:=True) If IsArray(FileNames) = False Then If FileNames = False Then Exit Sub End If End If For n = LBound(FileNames) To UBound(FileNames) Set WB = Workbooks.Open(Filename:=FileNames(n), ReadOnly:=True) For Each WS In WB.Worksheets If WS.Name = SourceSheet Then With WS If .UsedRange.Cells.Count > 1 Then dr = DestWB.Worksheets("Input").Range("C" & Rows.Count).End(xlUp).Row + 1 lastrow = .Range("C" & Rows.Count).End(xlUp).Row For j = lastrow To startrow Step -1 Select Case .Range("E" & j).Value Case "Manager", "Lead", "Technical", "Analyst" 'do nothing Case Else .Rows(j).EntireRow.Delete End Select Next lastrow = .Range("C" & Rows.Count).End(xlUp).Row If lastrow >= startrow Then .Range("B" & startrow & ":AD" & lastrow).Copy DestWB.Worksheets("Input").Cells(dr, "B").PasteSpecial xlValues .Range("AF" & startrow & ":AQ" & lastrow).Copy DestWB.Worksheets("Input").Cells(dr, "AF").PasteSpecial xlValues .Range("AS" & startrow & ":AS" & lastrow).Copy DestWB.Worksheets("Input").Cells(dr, "AS").PasteSpecial xlValues End If End If End With Exit For End If Next WS WB.Close savechanges:=False Next n End Sub The code works fine except for one issue which I've been trying to solve for the last few weeks. The following line of code looks in column E of the Source file, and if any of the entries match the values shown in the code it copies that row of data to paste into the Destination file. If Range("E" & j) <> "Manager" And Range("E" & j) <> "Lead" And Range("E" & j) <> "Technical" And Range("E" & j) <> "Analyst" Then Rows(j).Delete The problem I have is that if none of these values are found in the Source file, I receive the following error: Run time error '1004': Delete method of range class failed and in Debug mode it highlights this part of the line as the source of the error, but I've no idea why. Rows(j).Delete I just wondered whether someone may be able to look at this please and let me know where I'm going wrong, or perhaps even suggest a more efficient process of allowing the user to merge the workbooks. Many thanks and kind regards

    Read the article

  • C# UDP decoding datagrams fails randomly

    - by Tom Frey
    Hi, I'm experiencing an issue in a multi threaded application and have been debugging it for the last 3 days but for the life of it can not figure it out. I'm writing this, hoping that I either have a DUH moment when typing this or somebody sees something obvious in the code snippets I provide. Here's what's going on: I've been working on a new UDP networking library and have a data producer that multicasts UDP datagrams to several receiver applications. The sender sends on two different sockets that are bound to separate UDP multicast addresses and separate ports. The receiver application also creates two sockets and binds each one to one of the sender's multicast address/port. When the receiver receives the datagram, it copies it from the the buffer in a MemoryStream which is then put onto a thread safe queue, where another thread reads from it and decodes the data out of the MemoryStream. Both sockets have their own queues. What happens now is really weird, it happens randomly, non-reproducible and when I run multiple receiver applications, it only happens randomly on one of them every now and then. Basically, the thread that reads the MemoryStream out of the queue, reads it via a BinaryReader like ReadInt32(), etc. and thereby decodes the data. Every now and then however when it reads the data, the data it reads from it is incorrect, e.g. a negative integer number which the sender never would encode. However, as mentioned before, the decoding only fails in one of the receiver applications, in the other ones the datagram decodes fine. Now you might be saying, well, probably the UDP datagram has a byte corruption or something but I've logged every single datagram that's coming in and compared them on all receivers and the datagrams every application receives are absolutely identical. Now it gets even weirder, when I dump the datagram that failed to decode to disk and write a unit test that reads it and runs it through the decoder, it decodes just fine. Also when I wrap a try/catch around the decoder, reset the MemoryStream position in the catch and run it through the decoder again, it decodes just fine. To make it even weirder, this also only happens when I bind both sockets to read data from the sender, if I only bind one, it doesn't happen or at least I wasn't able to reproduce it. 
    Here is some code corresponding to what's going on. This is the receive callback for the socket:

        private void ReceiveCompleted(object sender, SocketAsyncEventArgs args)
        {
            if (args.SocketError != SocketError.Success)
            {
                InternalShutdown(args.SocketError);
                return;
            }
            if (args.BytesTransferred > SequencedUnitHeader.UNIT_HEADER_SIZE)
            {
                DataChunk chunk = new DataChunk(args.BytesTransferred);
                Buffer.BlockCopy(args.Buffer, 0, chunk.Buffer, 0, args.BytesTransferred);
                chunk.MemoryStream = new MemoryStream(chunk.Buffer);
                chunk.BinaryReader = new BinaryReader(chunk.MemoryStream);
                chunk.SequencedUnitHeader.SequenceID = chunk.BinaryReader.ReadUInt32();
                chunk.SequencedUnitHeader.Count = chunk.BinaryReader.ReadByte();

                if (prevSequenceID + 1 != chunk.SequencedUnitHeader.SequenceID)
                {
                    log.Error("UdpDatagramGap\tName:{0}\tExpected:{1}\tReceived:{2}", unitName,
                        prevSequenceID + 1, chunk.SequencedUnitHeader.SequenceID);
                }
                else if (chunk.SequencedUnitHeader.SequenceID < prevSequenceID)
                {
                    log.Error("UdpOutOfSequence\tName:{0}\tExpected:{1}\tReceived:{2}", unitName,
                        prevSequenceID + 1, chunk.SequencedUnitHeader.SequenceID);
                }

                prevSequenceID = chunk.SequencedUnitHeader.SequenceID;
                messagePump.Produce(chunk);
            }
            else
                UdpStatistics.FramesRxDiscarded++;

            Socket.InvokeAsyncMethod(Socket.ReceiveAsync, ReceiveCompleted, asyncReceiveArgs);
        }

    Here's some stub code that decodes the data:

        public static void OnDataChunk(DataChunk dataChunk)
        {
            try
            {
                for (int i = 0; i < dataChunk.SequencedUnitHeader.Count; i++)
                {
                    int val = dataChunk.BinaryReader.ReadInt32();
                    if (val < 0)
                        throw new Exception("EncodingException");

                    // do something with that value
                }
            }
            catch (Exception ex)
            {
                writer.WriteLine("ID:" + dataChunk.SequencedUnitHeader.SequenceID + " Count:" + dataChunk.SequencedUnitHeader.Count
                    + " " + BitConverter.ToString(dataChunk.Buffer, 0, dataChunk.Size));
                writer.Flush();
                log.ErrorException("OnDataChunk", ex);
                log.Info("RETRY FRAME:{0} Data:{1}", dataChunk.SequencedUnitHeader.SequenceID,
                    BitConverter.ToString(dataChunk.Buffer, 0, dataChunk.Size));

                dataChunk.MemoryStream.Position = 0;
                dataChunk.SequencedUnitHeader.SequenceID = dataChunk.BinaryReader.ReadUInt32();
                dataChunk.SequencedUnitHeader.Count = dataChunk.BinaryReader.ReadByte();
                OnDataChunk(dataChunk);
            }
        }

    As you can see, in the catch{} block I simply reset the MemoryStream.Position to 0 and call the same method again, and it decodes just fine the second time. I'm really out of ideas at this point and unfortunately had no DUH moment writing this. Does anybody have any idea what might be going on, or what else I could do to troubleshoot this? Thanks, Tom

    Read the article

  • Good Book for Learning Meteor: Discover Meteor

    - by Stephen.Walther
    A week or so ago, Sacha Greif asked me whether I would be willing to write a review of his new book on Meteor (published today) entitled Discover Meteor. Sacha wrote the book with Tom Coleman. Both Sacha and Tom are very active in the Meteor community – they are responsible for several well-known Meteor packages and projects including Atmosphere, Meteorite, meteor-router and Telescope – so I suspected that their book would be good. If you have not heard of Meteor, Meteor is a new framework for building web applications which is built on top of Node.js. Meteor excels at building a new category of constantly-connected, real-time web applications. It has some jaw-dropping features which I described in a previous blog entry: http://stephenwalther.com/archive/2013/03/18/an-introduction-to-meteor.aspx So, I am super excited about Meteor. Unfortunately, because it is evolving so quickly, learning how to write Meteor applications can be challenging. The official documentation at Meteor.com is good, but it is too basic. I'm happy to report that Discover Meteor is a really good book:

      • The book is a fun read. The writing is smooth and I read through the book from cover to cover in a single Saturday afternoon with pleasure.
      • The book is well organized. It contains a walk-through of building a social media app (Microscope). Interleaved through the app-building chapters, it contains tutorial chapters on Meteor features such as deployment and reactivity.
      • The book covers several advanced topics which I have not seen covered anywhere else. The chapters on publications and subscriptions, routing, and animation are especially good. I came away from the book with a deeper understanding of all of these topics.

    I wish that I had read Discover Meteor a couple of months ago; it would have saved me several weeks of reading Stack Overflow posts and struggling with the Meteor documentation. If you want to buy Discover Meteor, the authors gave me the following link which provides you with a 20% discount: http://discovermeteor.com/orionids

    Read the article
