Search Results

Search found 8075 results on 323 pages for 'report builder'.

Page 103/323 | < Previous Page | 99 100 101 102 103 104 105 106 107 108 109 110  | Next Page >

  • Cannot upgrade system because of this error

    - by user292375
    Setting up mongodb-org-server (2.6.1) ...
     * Starting database mongod        [fail]
    invoke-rc.d: initscript mongod, action "start" failed.
    dpkg: error processing mongodb-org-server (--configure):
     subprocess installed post-installation script returned error exit status 1
    No apport report written because MaxReports is reached already
    dpkg: dependency problems prevent configuration of mongodb-org:
     mongodb-org depends on mongodb-org-server; however:
      Package mongodb-org-server is not configured yet.
    dpkg: error processing mongodb-org (--configure):
     dependency problems - leaving unconfigured
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
     mongodb-org-server
     mongodb-org
    E: Sub-process /usr/bin/dpkg returned an error code (1)
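
    The packaging itself usually isn't the problem here: mongod failed to start, and the post-installation script propagated that failure. A minimal recovery sketch, assuming the default Ubuntu log path for the mongodb-org packages:

        # See why mongod actually refused to start (permissions, dbpath, port in use)
        sudo tail -n 50 /var/log/mongodb/mongod.log
        # After fixing the reported problem, finish the interrupted configure step
        sudo dpkg --configure -a
        sudo apt-get -f install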

    Read the article

  • Upgrade Ubuntu 10.04 to Ubuntu 10.10

    - by user8561
    Hi. I'm new to these forums so I'll be quick. When I try to upgrade from Ubuntu 10.04 to 10.10 I get the error below. I have tried upgrading from both the Terminal and Update Manager.

        Could not determine the upgrade
        An unresolvable problem occurred while calculating the upgrade:
        E:Error, pkgProblemResolver::Resolve generated breaks, this may be
        caused by held packages.
        This can be caused by:
         * Upgrading to a pre-release version of Ubuntu
         * Running the current pre-release version of Ubuntu
         * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug against the
        'update-manager' package and include the files in
        /var/log/dist-upgrade/ in the bug report.
        Restoring original system state
        Aborting

    Thanks
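
    The third cause the message lists (unofficial packages) is the usual culprit; a quick triage sketch, assuming an otherwise stock 10.04 install:

        # Any held packages will break the resolver's upgrade calculation
        dpkg --get-selections | grep hold
        # Third-party/PPA sources that may pin conflicting versions
        ls /etc/apt/sources.list.d/
        # The resolver's own account of what it tried
        less /var/log/dist-upgrade/main.log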

    Read the article

  • skype crashes, also audio problems on 13.10

    - by user139710
    I've always had problems with Skype, usually with audio, trying to keep the USB headphones set correctly. What I've found is that when the PulseAudio server is the only option in Skype's audio list, Skype works well: stable and reliable. Second issue: it's been crashing a bit; it just locks up, and the only thing that frees it up is a reboot. These problems are under the new release, 13.10, so consider this a bug report. Developers: you need a good channel for people to report bugs. OK, that's all.

    Read the article

  • ??????????? Oracle OpenWorld Tokyo 2012 ?????? ~?????????????~ ????????!

    - by M.Morozumi
    Oracle OpenWorld Tokyo 2012?JavaOne Tokyo 2012 ???????????????????????????Oracle Open World ?????????????????2????????? Exadata ???????·DBA·????????? Exadata?????????(???)???Oracle Exadata Database Machine ??? Exadata Storage Server X2-2 ??????????Exadata ???????????????????????????????????Database Machine?????????????????????????????????????????????????????????? ?????????(???) 2012?4?2?~4?3? Oracle BI Suite EE 10g??????! 11g????????????????????!! Oracle BIEE 10g???? 11g Report/Dashboard ?????Oracle BI Suite EE 10g?????????11g???(????)????????????????????????? Oracle BIEE 10g???? 11g Report/Dashboard ?? 2012?4?3?~4?4?

    Read the article

  • maintaining a growing, diverse codebase with continuous integration

    - by Nate
    I am in need of some help with the philosophy and design of a continuous integration setup. Our current CI setup uses buildbot. When I started out designing it, I inherited (well, not strictly, as I was involved in its design a year earlier) a bespoke CI builder that was tailored to run the entire build at once, overnight. After a while, we decided that this was insufficient and started exploring different CI frameworks, eventually choosing buildbot. One of my goals in transitioning to buildbot (besides getting to enjoy all the whiz-bang extras) was to overcome some of the inadequacies of our bespoke nightly builder.

    Humor me for a moment, and let me explain what I have inherited. The codebase for my company is almost 150 unique C++ Windows applications, each of which has dependencies on one or more of a dozen internal libraries (and many on 3rd-party libraries as well). Some of these libraries are interdependent and have depending applications that (while they have nothing to do with each other) have to be built with the same build of that library. Half of these applications and libraries are considered "legacy" and unportable, and must be built with several distinct configurations of the IBM compiler (for which I have written unique subclasses of Compile); the other half are built with Visual Studio. The code for each compiler is stored in two separate Visual SourceSafe repositories (which I am simply handling using a bunch of ShellCommands, as there is no support for VSS).

    Our original nightly builder simply took down the source for everything and built things in a certain order. There was no way to build only a single application, pick a revision, or group things. It launched virtual machines to build a number of the applications. It wasn't very robust, it wasn't distributable, and it wasn't terribly extensible. I wanted to overcome all of these limitations in buildbot.

    The way I did this originally was to create entries for each of the applications we wanted to build (all 150-ish of them), then create triggered schedulers that could build various applications as groups, and then subsume those groups under an overall nightly build scheduler. These could run on dedicated slaves (no more virtual machine chicanery), and if I wanted I could simply add new slaves. Now, if we want to do a full build out of schedule, it's one click, but we can also build just one application should we so desire.

    There are four weaknesses of this approach, however. The first is our source tree's complex web of dependencies: in order to simplify config maintenance, all builders are generated from a large dictionary, and the dependencies are retrieved and built in a not-terribly-robust fashion, namely, keying off of certain things in my build-target dictionary (see the sketch below). The second is that each build has between 15 and 21 build steps, which is hard to browse and look at in the web interface, and since there are around 150 columns, the page takes forever to load (think from 30 seconds to multiple minutes). Thirdly, we no longer have autodiscovery of build targets (although, as much as one of my coworkers harps on me about this, I don't see what it got us in the first place). Finally, the aforementioned coworker likes to constantly bring up the fact that we can no longer perform a full build on our local machine (though I never saw what that got us either, considering that it took three times as long as the distributed build; I think he is just irrationally afraid of ever breaking the build).

    Now, moving to new development, we are starting to use g++ and Subversion (not porting the old repository, mind you - just for the new stuff). Also, we are starting to do more unit testing ("more" might give the wrong picture... it's more like any) and integration testing (using Python). I'm having a hard time figuring out how to fit these into my existing configuration.

    So, where have I gone wrong philosophically here? How can I best proceed (with buildbot - it's the only piece of the puzzle I have license to work on) so that my configuration is actually maintainable? How do I address some of my design's weaknesses? What really works in terms of CI strategies for large, (possibly over-)complex codebases?
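
    A minimal sketch of the dictionary-driven builder generation described above (the TARGETS layout and all names are hypothetical, and the buildbot 0.8.x API is assumed):

        # master.cfg fragment: one builder per target, dependencies first.
        from buildbot.config import BuilderConfig
        from buildbot.process.factory import BuildFactory
        from buildbot.steps.shell import ShellCommand

        c = BuildmasterConfig = {}

        # Hypothetical target map: name -> (internal dependencies, build command)
        TARGETS = {
            'libcore':   ([],          ['build_libcore.bat']),
            'app_alpha': (['libcore'], ['build_app_alpha.bat']),
        }

        def make_builder(name, deps, command):
            factory = BuildFactory()
            for dep in deps:
                # Fetch each dependency's artifacts before building the target.
                factory.addStep(ShellCommand(command=['fetch_dep.bat', dep]))
            factory.addStep(ShellCommand(command=command))
            return BuilderConfig(name=name, slavenames=['win-slave1'],
                                 factory=factory)

        c['builders'] = [make_builder(n, deps, cmd)
                         for n, (deps, cmd) in TARGETS.items()]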

    Read the article

  • June is going to be a busy month!

    - by Monica Kumar
    Who says things slow down in summer? Well, maybe for school kids, but certainly not for Oracle's Virtualization team! June is turning out to be one of the busiest months for us: we are going to be participating in a number of industry events. If you happen to be at any of these, please stop by the Oracle booth and our sessions. Here's a rundown of the events.

    1. 13th Annual Call Center Week, June 4-8, Caesars Palace, Las Vegas (Event website). You're now wondering why we're at this call center show. It's really simple: Oracle's desktop virtualization solutions offer the best way for call centers to reliably and securely access enterprise apps using a variety of endpoint devices, such as an iPad or a Sun Ray Client. Provisioning new employees becomes a breeze. We'll be jointly showcasing our solution with Oracle's CRM team. Come check us out.

    2. Gartner Infrastructure & Operations Management Summit, June 5-7, Orlando, FL (Event website). Oracle is a Premier sponsor of the Gartner IOM Summit this June 5-7, 2012 in Orlando, FL. Attendees will have the opportunity to meet with Oracle experts in a variety of sessions, including demonstrations during the showcase receptions.

    3. Cloud Expo East. Check out our website for details of our participation. Stop by booth 511 to talk to our Cloud, Virtualization, and Big Data experts. In addition, we're delivering a number of sessions at Cloud Expo. The one I want to highlight is the following:

    Session: Borderless Applications in the Cloud with Oracle VM and Oracle Virtual Assembly Builder.

    Abstract: As virtualization adoption progresses beyond server consolidation, it is also transforming how enterprise applications are deployed and managed in an agile environment. The traditional method of business-critical application deployment, where administrators have to contend with an array of unrelated tools and custom scripts to deploy and manage applications, OS, and VM instances, can no longer scale effectively in a fast-changing cloud computing environment to achieve the desired response time and efficiency. Oracle VM and Oracle Virtual Assembly Builder allow applications, associated components, deployment metadata, management policies, and best practices to be encapsulated into ready-to-run VMs for rapid, repeatable deployment and ease of management. Join us in this Cloud Expo session to see how Oracle VM and Oracle Virtual Assembly Builder allow you to deploy complex multi-tier applications in minutes and easily onboard existing applications to cloud environments.

    Get your free Cloud Expo pass now! We're offering complimentary VIP Gold Passes. Go to https://www.blueskyz.com/v3/Login.aspx?ClientID=19&EventID=56&sg=177, click "Continue" if you are a new user, or log in if you have already created an account. Once there, you can view the agenda or register for Cloud Expo. To register, fill out the basic business-card questions and then enter oracleVIPgold in the Priority Code field to change the price from $2,000 to $0.

    4. CiscoLive 2012, June 10-14, San Diego, CA (Event website). Our Oracle VM and Oracle Linux experts will talk about joint collaboration with Cisco on UCS. We'll also highlight customer use cases.

    5. Gartner Infrastructure & Operations Management Summit EMEA, June 11-12, Frankfurt, Germany (Event website). Meet experts from our Virtualization and Linux team in EMEA. Stop by our booth and find out what's new in Oracle VM Server for x86 and Oracle Linux.

    June is going to be busy.

    Read the article

  • Barcodes and Bugs

    - by Tim Dexter
    A great mail from Mike at Browning last week. He has been through the wringer getting his BIP barcoding sorted out, but he's now out of the woods. Here's the final result. By way of explanation, an excerpt from Mike's email:

    This is an example of the GS1-128 carton shipping labels we are now producing with BIP in our web application for our vendors who drop-ship products to our dealers. It produces 4 labels per printed page, in PDF format, on peel & stick label paper. Each label has a unique carton number, and a unique carton serial number in the SSCC-18 barcode. This example is for Cabelas (each customer has slightly different GS1-128 label format requirements - a custom template for each - a pain!). I am using custom Java encoders I wrote for the UPC and SSCC-18 barcodes, and a standard encoder (code128b) for the ShipTo zip barcode. Is there any way yet to get around that SUPER ANNOYING bug when opening the RTF template in MS Word, where it replaces my XSL code text in the barcode fields with gibberish??? Every time I open it I have to re-enter all the XSL code, not only to be able to read and edit it, but also to get it to work in BIP (BIP doesn't like the gibberish if I upload the template that has it).

    Mike's last point, regarding the annoying bug in the template builder, is one that I have experienced occasionally. The development team have looked at it and found it to be an issue with MS Word and not a plugin problem. That's all well and good, but how can you get around it? Well, you can take advantage of the font mapping that BIP offers to get the barcodes into the PDF output. As many of you know, to get a barcode font to appear in the PDF output you need to employ the xdo.cfg file in the template builder config directory. You would normally have an entry such as this:

        <font family="Code 128" style="normal" weight="normal">
          <truetype path="C:\windows\fonts\128R00.TTF" />
        </font>

    to map a barcode font so that it renders in the PDF output when testing from the template builder plugin.

    Mike's issue is only present when the form field is highlighted with a barcode font; the other fields in the template are OK. What you can do to get around the issue is to bend the config entry so you avoid having to use the barcode font in the template at all. Change the entry to something like:

        <font family="Calibri" style="normal" weight="normal">
          <truetype path="C:\windows\fonts\128R00.TTF" />
        </font>

    Note that we are mapping Calibri, a human-readable and non-erroring font in the template, to the Code 128 barcode font. Where you used to highlight the field with the barcode font in MS Word, you now use the Calibri font instead. At run time, BIP will go looking for the Calibri font mapping and will drop in the Code 128 font. Of course, Calibri is just an example; you need to pick a font that you are not going to use anywhere else in the layout.

    Read the article

  • Notes - Part II - Play with JavaFX

    - by Silviu Turuga
    Open the project from the last lesson. Double-click on NotesUI.fxml; this will open the JavaFX Scene Builder.

    On the left side you have an area called Hierarchy. From there, press Del (or Shift+Backspace on a Mac) to delete the Button and the Label. You'll receive a warning that some components have been assigned an fx:id; click Delete, as we don't need them anymore. Resize the AnchorPane to have enough room for our design, e.g. 820x550 px. From the top left, pick the container called Accordion and drag it over the AnchorPane design. Then choose a List View from Controls and drag it inside the Accordion. You'll notice that by default the Accordion has 2 TitledPanes, and you can switch between them by clicking on their names. I'll leave you the pleasure of doing the rest in order to get the following result. Here is the list of objects used. Save it and then return to NetBeans.

    Run the application and it should run without any issue. If you click on the buttons they are all functional, but nothing happens, as we didn't link them to any action. We'll see this in the next episode. Now, let's play a little bit with the application and try to resize it... Have you noticed the behavior? If the form is too small, some objects aren't visible; if it is too large, there is too much space. That's for sure something that your users won't like, and you as a programmer have to care about this.

    From NetBeans, double-click NotesUI.fxml to return to the JavaFX Scene Builder. Select the TextField from the bottom left of Notes, the one where I put the text "Category", and then on the right side of the JavaFX Scene Builder you'll notice a panel called Inspector. Choose Layout, and then click on the dotted lines to the left and bottom of the square, as you see in the image below. This will make the text field keep the same distance from the left and bottom edges no matter the size of the form. Save and run the application. Note that whenever the form changes height, the Category TextField keeps the same distance from the bottom. Select the Accordion and do the same steps, but also check the top dotted line, because we want the Accordion to have the same height as the main form. I'll leave you the pleasure of doing the same for the rest of the components. It's very important to design an application that can be resized by the user while, at the same time, all the buttons stay in place.

    The last step is to make sure our application can't get smaller than a certain size, as this would hide parts of our layout. So select the AnchorPane and, from Inspector, go to Layout and note down the Width and Height. Go back to NetBeans, open the file Main.java, and add the following code just after stage.setScene(scene); (around line 26):

        stage.setMinWidth(820);
        stage.setMinHeight(550);

    Use your own width and height. This will prevent the user from reducing the width or height of your application to a value that would hide parts of your layout. So now you should have done most of the design part, and next time we'll see how we can enter some data into our newly created application... Note: in case you missed something, here are the source files of the project up to this point.
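
    For context, those two lines land in the standard JavaFX application entry point; a minimal sketch of what Main.java might look like (the loader call and file name follow the tutorial, the rest is assumed boilerplate):

        import javafx.application.Application;
        import javafx.fxml.FXMLLoader;
        import javafx.scene.Parent;
        import javafx.scene.Scene;
        import javafx.stage.Stage;

        public class Main extends Application {
            @Override
            public void start(Stage stage) throws Exception {
                Parent root = (Parent) FXMLLoader.load(getClass().getResource("NotesUI.fxml"));
                Scene scene = new Scene(root);
                stage.setScene(scene);
                // Keep the window at least as large as the designed layout.
                stage.setMinWidth(820);
                stage.setMinHeight(550);
                stage.show();
            }

            public static void main(String[] args) {
                launch(args);
            }
        }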

    Read the article

  • SQL 2008 SP2 RsClientPrint ActiveX - "Unable to load client print control"

    - by Miles
    We recently updated our SQL 2008 server to SP2 and it's causing a few headaches. We use SSRS on this server, and when a client tries to print a report via the built-in print function, we need to deploy the RsClientPrint ActiveX control from the server, and the client gets the following error: "Unable to load client print control." We have about 700 computers that need this fixed, and I've followed the instructions found at the following URL: http://www.kodyaz.com/articles/client-side-printing-silent-deployment-of-rsclientPrint.aspx

    We have two issues: most of the users who will be using this ActiveX control are not local administrators, so they will not be able to install the control themselves; and since there are so many computers, this has to be done silently behind the scenes, run by a local admin account. After following the information from the link above, we're able to put the files in the C:\Windows\System32 folder and register the DLL, but we still get the same problem. One small thing I've noticed is that in the HTML for the report page, everything that references a version references version 2007.100.4000.00, while the version of the DLL that I pulled from the report server is 2007.100.1600.22. Also, some clients that are local administrators are prompted every time to install the ActiveX control when they click print. This works successfully, but we can't have the user asked to install the same control every time they need to print.
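
    For reference, a minimal silent-deployment sketch of the register step (the share path is hypothetical; note the version mismatch above - if the registered DLL doesn't match the build the report page references, clients may keep being prompted for the server's own copy):

        rem Copy the print control where the report page expects it, then register quietly
        copy /y \\fileserver\deploy\RSClientPrint.dll %SystemRoot%\System32\
        regsvr32 /s %SystemRoot%\System32\RSClientPrint.dll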

    Read the article

  • mdadm cron job sends email that cron has run

    - by Andrew
    I've got an Ubuntu 8.04 server using mdadm to create several RAID1 arrays. I created /etc/cron.hourly/mdadm as follows:

        #! /bin/sh
        set -e
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot

    (Yes, the array numbers are not sequential, and I'm not using --scan because I have a degraded array that may or may not have been used as swap and that I can't delete, but I think that's a separate issue. If it's the underlying cause of this, I need to fix it.)

    mdadm sends me email (configured in /etc/mdadm/mdadm.conf) on DegradedArray etc. events. This is the desired behaviour. What is not desired, and I can't work out, is why cron is sending me (relatively pointless) emails, via an alias in /etc/aliases:

        From: root@<hostname> (Cron Daemon)
        To: root@<hostname>
        Subject: Cron <root@<hostname>> cd / && run-parts --report /etc/cron.hourly
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <LOGNAME=root>
        Message-Id: <id@hostname>
        Date: Fri, 7 May 2010 13:17:01 +0930 (CST)

        /etc/cron.hourly/mdadm:
        mdadm: Monitor using email address "<root_alias@domain>" from config file

    I've got a dozen other servers behaving correctly (mdadm sends email, cron doesn't) with identical /etc/crontab files:

        # /etc/crontab: system-wide crontab
        # <snip comments>
        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user  command
        17 * * * *  root    cd / && run-parts --report /etc/cron.hourly
        <snip anacron jobs>

    Should I simply remove the --report, or is there something else in my cron config somewhere causing this?
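
    A minimal sketch of one way to quiet this, assuming the quoted "Monitor using email address" line is ordinary stdout chatter (cron mails any output a job produces, while mdadm's own DegradedArray mail goes out via mdadm.conf regardless):

        #! /bin/sh
        set -e
        # Discard startup chatter so cron has nothing to mail;
        # stderr is kept so real failures still generate a message.
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot > /dev/null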

    Read the article

  • different nmap results

    - by aasasas
    Hello. I have scanned my server from outside and from inside; why are the results different?

        [root@xxx ~]# nmap -sV -p 0-65535 localhost

        Starting Nmap 5.51 ( http://nmap.org ) at 2011-02-16 07:59 MSK
        Nmap scan report for localhost (127.0.0.1)
        Host is up (0.000015s latency).
        rDNS record for 127.0.0.1: localhost.localdomain
        Not shown: 65534 closed ports
        PORT   STATE SERVICE VERSION
        22/tcp open  ssh     OpenSSH 4.3 (protocol 2.0)
        80/tcp open  http    Apache httpd 2.2.3 ((CentOS))

        Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 7.99 seconds

    and:

        sh-3.2# nmap -sV -p 0-65535 xxx.com

        Starting Nmap 5.51 ( http://nmap.org ) at 2011-02-16 00:01 EST
        Warning: Unable to open interface vmnet1 -- skipping it.
        Warning: Unable to open interface vmnet8 -- skipping it.
        Stats: 0:07:49 elapsed; 0 hosts completed (1 up), 1 undergoing SYN Stealth Scan
        SYN Stealth Scan Timing: About 36.92% done; ETC: 00:22 (0:13:21 remaining)
        Stats: 0:22:05 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan
        Service scan Timing: About 75.00% done; ETC: 00:23 (0:00:02 remaining)
        Nmap scan report for xxx.com (x.x.x.x)
        Host is up (0.22s latency).
        Not shown: 65528 closed ports
        PORT     STATE SERVICE    VERSION
        21/tcp   open  tcpwrapped
        22/tcp   open  ssh        OpenSSH 4.3 (protocol 2.0)
        25/tcp   open  tcpwrapped
        80/tcp   open  http       Apache httpd 2.2.3 ((CentOS))
        110/tcp  open  tcpwrapped
        143/tcp  open  tcpwrapped
        443/tcp  open  tcpwrapped
        8080/tcp open  http-proxy?

    Read the article

  • Problems with image/file upload in MediaWiki on Windows 2008 Server R2, using wrong temp directory

    - by Lasse V. Karlsen
    I have installed MediaWiki 1.15.2 under IIS as per the MediaWiki installation instructions for Windows 2008 Server. I have configured PHP to use a specific temp directory:

        upload_tmp_dir="C:\php\uploadtemp"

    I have specified that MediaWiki is allowed to upload:

        $wgEnableUploads = true;

    But when I try to upload an image, I get this error message in my browser:

        Internal error
        Could not find file "C:\Windows\Temp\php1AEA.tmp".

    Retrying will simply give me a new filename, but in the same location. The directory does not have any php* files in it, but since they're "temporary", they might be gone in a flash before Windows Explorer is able to show them, so that might be a red herring. I've googled for this, and the most promising lead I found was on the page "Image upload problem - Is this bug fixed?". But since that text says "a bugfix was posted on the bug-report page" without linking to the bug tracker it relates to (PHP or MediaWiki) or to the actual bug report, I haven't conclusively found the bug report in question, so that didn't help me much. Lots of pages indicate that this is a permission issue, so I tried setting permissions on C:\Windows\Temp as Modify by Everyone; still no dice. I tried changing the two system environment variables TEMP and TMP to point to C:\Temp instead, but MediaWiki still complains about not finding the file in C:\Windows\Temp. Note that I don't care a lot about where the files are actually stored temporarily, so C:\Windows\Temp is fine by me. I do, however, care about them actually being uploaded correctly. Does anyone know of a fix, have any leads I can follow, or whatnot? The server is running Windows 2008 Server R2, all patches installed, and the PHP installed is 5.3.2, using IIS FastCGI.
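
    One thing worth ruling out: PHP silently falls back to the system temp directory when upload_tmp_dir is missing or not writable by the worker process, which would produce exactly these C:\Windows\Temp paths. A minimal diagnostic sketch (drop it in the web root and browse to it; nothing assumed beyond stock PHP):

        <?php
        // Which ini file does IIS FastCGI actually load?
        echo 'Loaded php.ini: ' . php_ini_loaded_file() . "\n";
        // The configured upload temp dir (empty means "use the system default")
        echo 'upload_tmp_dir: ' . ini_get('upload_tmp_dir') . "\n";
        // The system default PHP falls back to
        echo 'Fallback temp dir: ' . sys_get_temp_dir() . "\n";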

    Read the article

  • Jobs with anacron won't run

    - by mareser
    I would like to run two bash scripts daily using anacron in order to back up some data. Unfortunately I can't figure out why said scripts are not executed. For test purposes I let cron execute the scripts and it worked fine. cat /etc/anacrontab gives:

        # /etc/anacrontab: configuration file for anacron
        # See anacron(8) and anacrontab(5) for details.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # These replace cron's entries
        1         5   cron.daily    nice run-parts --report /etc/cron.daily
        7         10  cron.weekly   nice run-parts --report /etc/cron.weekly
        @monthly  15  cron.monthly  nice run-parts --report /etc/cron.monthly
        1         5   TB_bak        /bin/sh /home/vasco2/Dropbox/Scripts/backup_TB.sh
        1         5   key_db_bak    /bin/sh /home/vasco2/Dropbox/Scripts/bak_key_db.sh

    The output of ls ~/Dropbox/Scripts/ is:

        backup_TB.sh  bak_key_db.sh

    I use Linux Mint Katya. uname -a gives:

        Linux vasco2 2.6.38-8-generic-pae #42-Ubuntu SMP Mon Apr 11 05:17:09 UTC 2011 i686 i686 i386 GNU/Linux

    I would be very happy if somebody could point me in the right direction on why those scripts won't get executed. P.S.: There is no anacron tag on superuser.com. Maybe somebody wants to change that.
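
    A quick way to see what anacron itself is doing (the flags are standard: -f forces every job regardless of timestamps, -n ignores the start delays, -d stays in the foreground and prints debug messages):

        # Run all due jobs now and watch for errors
        sudo anacron -f -n -d
        # Job timestamps live here; a job only runs when its stamp is old enough
        ls -l /var/spool/anacron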

    Read the article

  • Trouble getting started with the STEALTH monitoring package

    - by dlanced
    Is anyone here familiar with the Linux-based STEALTH package (for monitoring FS integrity of client systems)? I'm trying to get started with a very simple configuration, but I'm running into trouble (this is running under Ubuntu 14.04):

        Config line `USE BASE/root/stealth/10.0.0.79' invalid
        STEALTH (2.11.02) started at Fri, 30 May 2014 15:25:00 +0000
        Program terminated due to non-zero exit value for
        -type f -exec /usr/bin/sha1sum {} \; (EOC Fri May 30 15:25:00 2014 127)

    Stealth is creating a binary tmp file in the Stealth server root and generating a "report" file in the start directory, but not much else. Regarding the "USE BASE...invalid" error, and just to be sure, I manually created the directories in /root, but it didn't help. And, by the way, I am running stealth with sudo. Everything seems to be configured correctly: I'm able to ssh into root@client from the stealth machine without a password. Here's my "policy" file (I've removed the email directives just for simplicity):

        DEFINE SSHCMD   /usr/bin/ssh [email protected] -T -q exec /bin/bash --noprofile
        DEFINE EXECSHA1 -xdev -perm +u+s,g+s ( -user root -or -group root ) \
                        -type f -exec /usr/bin/sha1sum {} \;

        USE BASE/root/stealth/10.0.0.79
        USE SSH ${SSHCMD}
        USE DD /bin/dd
        USE DIFF /usr/bin/diff
        USE PIDFILE /var/run/stealth-
        USE REPORT report
        USE SH /bin/sh

        GET /usr/bin/sha1sum /root/tmp

        LABEL \nchecking the client's /usr/bin/find program
        CHECK LOG = remote/binfind /usr/bin/sha1sum /usr/bin/find

        LABEL \nsuid/sgid/executable files uid or gid root on the / partition
        CHECK LOG = remote/setuidgid /usr/bin/find / ${EXECSHA1}

        LABEL \nconfiguration files under /etc
        CHECK LOG = remote/etcfiles \
            /usr/bin/find /etc -type f -not -perm /6111 \
                -not -regex "/etc/(adjtime\|mtab)" \
                -exec /usr/bin/sha1sum {} \;

    Any ideas? Thanks,
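
    For what it's worth, the error message quotes `USE BASE/root/stealth/10.0.0.79' as a single token, and the policy file indeed has no whitespace between the keyword and its argument. A sketch of the line as I'd expect stealth to want it (assumption: USE BASE takes the path as a separate argument, like the other USE directives do):

        USE BASE /root/stealth/10.0.0.79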

    Read the article

  • Discrepancy in file size on disk and ls output

    - by smokinguns
    I have a script that checks for gzipped files larger than 1 MB and outputs them along with their sizes as a report. This is the code:

        myReport=`ls -ltrh "$somePath" | egrep '\.gz$' | awk '{print $9,"=>",$5}'`
        # Count files that exceed 1MB
        oversizeFiles=`find "$somePath" -maxdepth 1 -size +1M -iname "*.gz" -print0 | xargs -0 ls -lh | wc -l`
        if [ $oversizeFiles -eq 0 ]; then
            status="PASS"
        else
            status="CHECK FAILED. FOUND FILES GREATER THAN 1MB"
        fi
        echo -e $status"\n"$myReport

    The problem is that the ls command reports the file sizes as 1.0MB in the report, but the status is "FAIL", as the $oversizeFiles variable's value is 2. I checked the file sizes on disk and 2 files are 1.1MB. Why this discrepancy? How should I modify the script so that I can generate an accurate report? BTW, I'm on a Mac. Here is what the man page for find says on my Mac OS X:

        -size n[ckMGTP]
            True if the file's size, rounded up, in 512-byte blocks is n. If n is
            followed by a c, then the primary is true if the file's size is n bytes
            (characters). Similarly, if n is followed by a scale indicator, then the
            file's size is compared to n scaled as:
                k    kilobytes (1024 bytes)
                M    megabytes (1024 kilobytes)
                G    gigabytes (1024 megabytes)
                T    terabytes (1024 gigabytes)
                P    petabytes (1024 terabytes)
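
    The man page excerpt is the key: find rounds sizes up to whole units, and ls -h rounds for display, so the two tools can disagree near the 1 MB boundary. A sketch that compares exact bytes on both sides (threshold assumed to be 1,048,576 bytes; BSD tools as on a Mac):

        # 'c' compares raw bytes, so nothing is rounded up to the next megabyte
        oversizeFiles=`find "$somePath" -maxdepth 1 -size +1048576c -iname "*.gz" | wc -l`
        # Report un-humanized sizes so the report matches what was counted
        myReport=`ls -ltr "$somePath" | egrep '\.gz$' | awk '{print $9,"=>",$5}'`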

    Read the article

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at all the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes; that's fine. What happens if the main DNX server fails, though? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle failure of the AZ where the monitoring server lives, and essentially have a second server pick up the checking load (active/passive, active/active... so long as it doesn't fail completely).

    The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not for the NRPE checks, as they're pretty self-explanatory, but for things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can often report bad/no ping/timeout as a critical issue, even though the machine in question is working fine. Would it be possible to have a setup where a worker complains that a service check is critical, and have a second worker node (positioned in another datacenter/AZ) also report the service as critical before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous?), but surely someone must have thought of this scenario when developing DNX?

    Read the article

  • SQLiteQueryBuilder.buildQuery not using selectArgs?

    - by user297468
    Alright, I'm trying to query a SQLite database. I was trying to be good and use the query method of SQLiteDatabase, passing the values in the selectionArgs parameter to ensure everything got properly escaped, but it wouldn't work: I never got any rows returned (no errors, either). I got curious about the SQL this generated, so I did some more poking around and found SQLiteQueryBuilder (and apparently Stack Overflow doesn't handle links with parentheses in them well, so I can't link to the anchor for the buildQuery method), which I assume uses the same logic to generate the SQL statement. I did this:

        SQLiteQueryBuilder builder = new SQLiteQueryBuilder();
        builder.setTables(BarcodeDb.Barcodes.TABLE_NAME);
        String sql = builder.buildQuery(
                new String[] { BarcodeDb.Barcodes.ID, BarcodeDb.Barcodes.TIMESTAMP,
                               BarcodeDb.Barcodes.TYPE, BarcodeDb.Barcodes.VALUE },
                "? = '?' AND ? = '?'",
                new String[] { BarcodeDb.Barcodes.VALUE, barcode.getValue(),
                               BarcodeDb.Barcodes.TYPE, barcode.getType() },
                null, null, null, null);
        Log.d(tag, "Query is: " + sql);

    The SQL that gets logged at this point is:

        SELECT _id, timestamp, type, value FROM barcodes WHERE (? = '?' AND ? = '?')

    However, here's what the documentation for SQLiteQueryBuilder.buildQuery says about the selectionArgs parameter: "You may include ?s in selection, which will be replaced by the values from selectionArgs, in order that they appear in the selection." ...but it isn't working. Any ideas?
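
    A hedged observation: selection arguments are bound only as SQL values when the query is executed (not substituted into the string buildQuery returns), and they can never stand in for column names; a '?' inside quotes is also just a literal question mark. A sketch of the conventional form, with the column names inlined and only the values parameterized:

        String selection = BarcodeDb.Barcodes.VALUE + " = ? AND "
                         + BarcodeDb.Barcodes.TYPE + " = ?";
        String[] selectionArgs = { barcode.getValue(), barcode.getType() };
        String sql = builder.buildQuery(
                new String[] { BarcodeDb.Barcodes.ID, BarcodeDb.Barcodes.TIMESTAMP,
                               BarcodeDb.Barcodes.TYPE, BarcodeDb.Barcodes.VALUE },
                selection, selectionArgs, null, null, null, null);
        // The ?s survive in the generated SQL; they are bound at execution time,
        // given an open SQLiteDatabase db:
        Cursor cursor = db.rawQuery(sql, selectionArgs);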

    Read the article

  • Hibernate JPA 2.0 CriteriaQuery, subset listing and counting at once

    - by Jeroen
    Hello, I recently started using the new Hibernate (EntityManager) 3.5.1 with JPA 2.0, and I was wondering if it is possible to retrieve both a (sub-set) of entities and their count from a single CriteriaQuery instance. My current implementation looks as follows:

        class HibernateResult<T> extends AbstractResult<T> {
            /**
             * Construct a new {@link HibernateResult}.
             * @param criteriaQuery the criteria query
             * @param selector the selector that determines the entities to return
             */
            HibernateResult(CriteriaQuery<T> criteriaQuery, Selector selector,
                            EntityManager entityManager) {
                CriteriaBuilder builder = entityManager.getCriteriaBuilder();
                // Count the entities
                CriteriaQuery<Long> countQuery = builder.createQuery(Long.class);
                Root<T> path = criteriaQuery.from(criteriaQuery.getResultType());
                countQuery.select(builder.count(path));
                final int count = entityManager.createQuery(countQuery)
                        .getSingleResult().intValue();
                this.setCount(count);
                // List the entities according to selector
                TypedQuery<T> entityQuery = entityManager.createQuery(criteriaQuery);
                entityQuery.setFirstResult(selector.getFirstResult());
                entityQuery.setMaxResults(selector.getMaxRecords());
                List<T> entities = entityQuery.getResultList();
                this.setEntities(entities);
            }
        }

    The thing is that I want to count all the entities that match my criteria query, but the count method from CriteriaBuilder only seems to take an Expression as argument. Is there any quick way of converting my criteria query to an expression?
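
    One possibility, sketched under the assumption that JPA 2.0's CriteriaQuery.getRestriction() is enough for your queries (it exposes the where clause, but predicates built against the original query's root may not transfer cleanly to a second query):

        CriteriaQuery<Long> countQuery = builder.createQuery(Long.class);
        Root<T> countRoot = countQuery.from(criteriaQuery.getResultType());
        countQuery.select(builder.count(countRoot));
        // Reuse the original query's where clause for the count, if it has one.
        if (criteriaQuery.getRestriction() != null) {
            countQuery.where(criteriaQuery.getRestriction());
        }
        int count = entityManager.createQuery(countQuery).getSingleResult().intValue();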

    Read the article

  • How to generate XML with attributes in C#?

    - by user292815
    I have this code:

        ...
        request data = new request();
        data.username = formNick;
        xml = data.Serialize();
        ...

        [System.Serializable]
        public class request
        {
            public string username;
            public string password;

            static XmlSerializer serializer = new XmlSerializer(typeof(request));

            public string Serialize()
            {
                StringBuilder builder = new StringBuilder();
                XmlWriterSettings settings = new XmlWriterSettings();
                settings.OmitXmlDeclaration = true;
                settings.Encoding = Encoding.UTF8;
                serializer.Serialize(System.Xml.XmlWriter.Create(builder, settings), this);
                return builder.ToString();
            }

            public static request Deserialize(string serializedData)
            {
                return serializer.Deserialize(new StringReader(serializedData)) as request;
            }
        }

    I want to add attributes to some nodes and create some sub-nodes. I also want to parse XML like this:

        <answer>
          <player id="2">
            <coordinate axis="x"></coordinate>
            <coordinate axis="y"></coordinate>
            <coordinate axis="z"></coordinate>
            <action name="nothing"></action>
          </player>
          <player id="3">
            <coordinate axis="x"></coordinate>
            <coordinate axis="y"></coordinate>
            <coordinate axis="z"></coordinate>
            <action name="boom">
              <1>1</1>
              <2>2</2>
            </action>
          </player>
        </answer>

    P.S. It is not an XML file; it's the answer from an HTTP server.
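
    For the attribute side of the question, XmlSerializer is driven by attributes on the class; a minimal sketch modeled on the sample answer above (the player/coordinate/action classes are hypothetical, and note that element names such as <1> are not legal XML, so that part of the sample would need renaming):

        [System.Serializable]
        public class player
        {
            [System.Xml.Serialization.XmlAttribute("id")]
            public int Id;                    // serialized as <player id="...">

            [System.Xml.Serialization.XmlElement("coordinate")]
            public coordinate[] Coordinates;  // repeated child elements

            [System.Xml.Serialization.XmlElement("action")]
            public action Action;
        }

        public class coordinate
        {
            [System.Xml.Serialization.XmlAttribute("axis")]
            public string Axis;               // <coordinate axis="x">
            [System.Xml.Serialization.XmlText]
            public string Value;              // the element's body text
        }

        public class action
        {
            [System.Xml.Serialization.XmlAttribute("name")]
            public string Name;
        }

    The same classes should work in reverse for parsing the server's answer with Deserialize.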

    Read the article

  • Selectively intercepting methods using autofac and dynamicproxy2

    - by Mark Simpson
    I'm currently doing a bit of experimenting using Autofac 1.4.5.676, AutofacContrib, and Castle DynamicProxy2. The goal is to create a coarse-grained profiler that can intercept calls to specific methods of a particular interface. The problem: I have everything working perfectly apart from the selective part. I gather that I need to marry up my interceptor with an IProxyGenerationHook implementation, but I can't figure out how to do this. My code looks something like this. The interface that is to be intercepted and profiled (note that I only care about profiling the Update() method):

        public interface ISomeSystemToMonitor
        {
            void Update(); // this is the one I want to profile
            void SomeOtherMethodWeDontCareAboutProfiling();
        }

    Now, when I register my systems with the container, I do the following:

        // Register interceptor gubbins
        builder.RegisterModule(new FlexibleInterceptionModule());
        builder.Register<PerformanceInterceptor>();

        // Register systems (just one in this example)
        builder.Register<AudioSystem>()
            .As<ISomeSystemToMonitor>()
            .InterceptedBy(typeof(PerformanceInterceptor));

    All ISomeSystemToMonitor instances pulled out of the container are intercepted and profiled as desired, except that it intercepts all of their methods, not just the Update method. Now, how can I extend this to exclude all methods other than Update()? As I said, I don't understand how I'm meant to say "for the PerformanceInterceptor, use this implementation of IProxyGenerationHook". All help appreciated, cheers! Also, please note that I can't upgrade to Autofac 2.x right now; I'm stuck on 1.x.
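
    For reference, a minimal IProxyGenerationHook sketch that limits interception to Update() (member names are from the DynamicProxy 2.1/2.2 era; later releases renamed NonVirtualMemberNotification, and wiring the hook into Autofac 1.4's InterceptedBy pipeline is the part I can't vouch for):

        using System;
        using System.Reflection;
        using Castle.DynamicProxy;

        public class UpdateOnlyHook : IProxyGenerationHook
        {
            // Only the method we want profiled gets routed to the interceptor.
            public bool ShouldInterceptMethod(Type type, MethodInfo methodInfo)
            {
                return methodInfo.Name == "Update";
            }

            public void NonVirtualMemberNotification(Type type, MemberInfo memberInfo) { }

            public void MethodsInspected() { }
        }

    Outside the container, the hook is handed to the proxy via ProxyGenerationOptions, e.g. new ProxyGenerator().CreateInterfaceProxyWithTarget(target, new ProxyGenerationOptions(new UpdateOnlyHook()), interceptor).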

    Read the article

  • Trying to use authlogic-connect as a plugin in place of a gem - server doesn't start

    - by Arkid
    I am trying to use authlogic-connect as a plugin in Rails 3 in place of a gem. I have made an entry in the Gemfile as:

        gem "authlogic-connect", :require => "authlogic-connect", :path => "localgems"

    Now when I run bundle install, it runs fine. When I try to start the server, I get the error:

        Could not find gem 'authlogic-connect (>= 0, runtime)' in source at localgems.
        Source does not contain any versions of 'authlogic-connect (>= 0, runtime)'
        Try running `bundle install`.

    I have placed the unzipped gem, renamed to authlogic-connect, in the localgems folder. What is the problem? Here is what I get on using rails plugin install:

        arkidmitra$ rails plugin install git://github.com/viatropos/authlogic-connect.git
        Usage:
          rails new APP_PATH [options]

        Options:
              [--skip-gemfile]        # Don't create a Gemfile
          -d, [--database=DATABASE]   # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                      # Default: sqlite3
          -O, [--skip-active-record]  # Skip Active Record files
              [--dev]                 # Setup the application with Gemfile pointing to your Rails checkout
          -J, [--skip-prototype]      # Skip Prototype files
          -T, [--skip-test-unit]      # Skip Test::Unit files
          -G, [--skip-git]            # Skip Git ignores and keeps
          -r, [--ruby=PATH]           # Path to the Ruby binary of your choice
                                      # Default: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          -m, [--template=TEMPLATE]   # Path to an application template (can be a filesystem path or URL)
          -b, [--builder=BUILDER]     # Path to an application builder (can be a filesystem path or URL)
              [--edge]                # Setup the application with Gemfile pointing to Rails repository

        Runtime options:
          -q, [--quiet]    # Supress status output
          -s, [--skip]     # Skip files that already exist
          -f, [--force]    # Overwrite files that already exist
          -p, [--pretend]  # Run but do not make any changes

        Rails options:
          -h, [--help]     # Show this help message and quit
          -v, [--version]  # Show Rails version number and quit

        Description:
            The 'rails new' command creates a new Rails application with a default
            directory structure and configuration at the path you specify.

        Example:
            rails new ~/Code/Ruby/weblog

            This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
            See the README in the newly created application to get going.
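
    A hedged guess at the Gemfile fix: with a :path source, Bundler looks for the gem's own .gemspec at that path, so pointing at a parent folder of unpacked gems finds nothing. Assuming the layout localgems/authlogic-connect/ with a gemspec inside:

        # Gemfile
        gem "authlogic-connect", :require => "authlogic-connect",
            :path => "localgems/authlogic-connect"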

    Read the article

  • Attempted to read or write protected memory

    - by Interfector
    I have a sample ASP.NET MVC 3 web application that is following Jonathan McCracken's Test-Drive ASP.NET MVC (great book, by the way), and I have stumbled upon a problem. Note that I'm using MvcContrib, Rhino Mocks, and NUnit.

        [Test]
        public void ShouldSetLoggedInUserToViewBag()
        {
            var todoController = new TodoController();
            var builder = new TestControllerBuilder();
            builder.InitializeController(todoController);
            builder.HttpContext.User = new GenericPrincipal(new GenericIdentity("John Doe"), null);

            Assert.That(todoController.Index().AssertViewRendered()
                .ViewData["UserName"], Is.EqualTo("John Doe"));
        }

    The code above always throws this error:

        System.AccessViolationException : Attempted to read or write protected memory.
        This is often an indication that other memory is corrupt.

    The controller action code is the following:

        [HttpGet]
        public ActionResult Index()
        {
            ViewData.Model = Todo.ThingsToBeDone;
            ViewBag.UserName = HttpContext.User.Identity.Name;
            return View();
        }

    From what I have figured out, the app seems to crash because of the two assignments in the controller action. However, I cannot see what is wrong with them! Can anyone help me pinpoint the solution to this problem? Thank you.

    Read the article

  • SEO: A whois server that works for .SE domains?

    - by Niels Bosma
    I'm developing a small domain checker and I can't get .SE to work:

        public string Lookup(string domain, RecordType recordType, SeoToolsSettings.Tld tld)
        {
            TcpClient tcp = new TcpClient();
            tcp.Connect(tld.WhoIsServer, 43);

            string strDomain = recordType.ToString() + " " + domain + "\r\n";
            byte[] bytDomain = Encoding.ASCII.GetBytes(strDomain.ToCharArray());

            Stream s = tcp.GetStream();
            s.Write(bytDomain, 0, strDomain.Length);

            StreamReader sr = new StreamReader(tcp.GetStream(), Encoding.ASCII);
            string strLine = "";
            StringBuilder builder = new StringBuilder();
            while (null != (strLine = sr.ReadLine()))
            {
                builder.AppendLine(strLine);
            }
            tcp.Close();

            if (tld.WhoIsDelayMs > 0)
                System.Threading.Thread.Sleep(tld.WhoIsDelayMs);

            return builder.ToString();
        }

    I've tried the whois servers whois.nic-se.se and whois.iis.se, but I keep getting:

        # Copyright (c) 1997- .SE (The Internet Infrastructure Foundation).
        # All rights reserved.
        # The information obtained through searches, or otherwise, is protected
        # by the Swedish Copyright Act (1960:729) and international conventions.
        # It is also subject to database protection according to the Swedish
        # Copyright Act.
        # Any use of this material to target advertising or
        # similar activities is forbidden and will be prosecuted.
        # If any of the information below is transferred to a third
        # party, it must be done in its entirety. This server must
        # not be used as a backend for a search engine.
        # Result of search for registered domain names under
        # the .SE top level domain.
        # The data is in the UTF-8 character set and the result is
        # printed with eight bits.

        "domain google.se" not found.

    Edit: I've tried changing to UTF-8 with no other result. When I try using whois from Sysinternals I get the correct result, but not with my code, not even using SE.whois-servers.net.
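
    One observation: the server echoes the whole query back as `"domain google.se" not found`, which suggests the "domain " prefix produced by recordType.ToString() is being read as part of the name. A sketch of the query line without the prefix (assumption: .SE's whois wants the bare domain, unlike registries that accept a record-type keyword):

        // Send just the domain name; the record-type keyword confuses this server.
        string strDomain = domain + "\r\n";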

    Read the article

  • Error // Usage: rails new APP_PATH [options] // when running 'rails server'

    - by madphill
    Background info: I'm using Git to fetch a repository of a project with Ruby files in it. The project lives in my Sites folder in my home directory on my Mac. I have Ruby 1.8.7 and have just upgraded Rails to 3.0.3. All I am trying to accomplish is to render the Git project I've already downloaded at localhost:3000 in my browser, so I can work on it locally. I ran the command rails server and got back the message below:

        Usage:
          rails new APP_PATH [options]

        Options:
              [--skip-gemfile]        # Don't create a Gemfile
          -m, [--template=TEMPLATE]   # Path to an application template (can be a filesystem path or URL)
          -d, [--database=DATABASE]   # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                      # Default: sqlite3
          -O, [--skip-active-record]  # Skip Active Record files
          -J, [--skip-prototype]      # Skip Prototype files
          -T, [--skip-test-unit]      # Skip Test::Unit files
              [--dev]                 # Setup the application with Gemfile pointing to your Rails checkout
          -r, [--ruby=PATH]           # Path to the Ruby binary of your choice
                                      # Default: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          -G, [--skip-git]            # Skip Git ignores and keeps
          -b, [--builder=BUILDER]     # Path to an application builder (can be a filesystem path or URL)
              [--edge]                # Setup the application with Gemfile pointing to Rails repository

        Runtime options:
          -f, [--force]    # Overwrite files that already exist
          -s, [--skip]     # Skip files that already exist
          -p, [--pretend]  # Run but do not make any changes
          -q, [--quiet]    # Supress status output

        Rails options:
          -h, [--help]     # Show this help message and quit
          -v, [--version]  # Show Rails version number and quit

        Description:
            The 'rails new' command creates a new Rails application with a default
            directory structure and configuration at the path you specify.

        Example:
            rails new ~/Code/Ruby/weblog

            This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
            See the README in the newly created application to get going.
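
    Rails 3's binary falls back to the `rails new` usage text whenever it can't find script/rails in the current directory, so this output usually just means the command ran outside a Rails 3 app root. A short triage sketch (the project path is hypothetical):

        cd ~/Sites/myproject    # run from the app root, not its parent
        ls script/rails         # present in Rails 3 apps; if it's missing, the
                                # repo is a Rails 2 app - use script/server instead
        bundle install          # install the app's gems, then:
        rails server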

    Read the article
