Search Results

Search found 7377 results on 296 pages for 'report viewer'.


  • Software center is broken and can not be repaired or reinstalled

    - by Michal
    When I open the software center, I am told that I can not use it, for it is broken, and needs to be repaired. First I try to do this automatically, as I was offered. I enter a root password, and then the installation fails. installArchives() failed: reconfiguring packages... reconfiguring packages... reconfiguring packages... reconfiguring packages... (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 275048 files and directories currently installed.) Unpacking wine1.4-i386 (from .../wine1.4-i386_1.4-0ubuntu4.1_i386.deb) ... dpkg: error processing /var/cache/apt/archives/wine1.4-i386_1.4-0ubuntu4.1_i386.deb (--unpack): trying to overwrite '/usr/bin/wine', which is also in package wine1.5 1.5.5-0ubuntu1~ppa1~oneiric1+pulse17 No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/wine1.4-i386_1.4-0ubuntu4.1_i386.deb Error in function: dpkg: dependency problems prevent configuration of wine1.4-common: wine1.4-common depends on wine1.4 (= 1.4-0ubuntu4.1); however: Package wine1.4 is not installed. dpkg: error processing wine1.4-common (--configure): dependency problems - leaving unconfigured What should I do now? First of all, I've tried reinstalling the center, but it failed due to the same 1.4 dependency as is laid out here. I've googled for help and although I don't understand linux at all, I've tried some suggestions: I've tried editing the dpkg status in /var/lib/dpkg/status which failed because the file could not be found. I've tried purging wine/* but that had unresolved dependencies as well. It's a giant mess.
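
    A hedged sequence for untangling this particular conflict (the package names are taken from the error above, everything else is general advice, not from the post): remove the PPA wine1.5 package that owns /usr/bin/wine, then let apt finish the half-installed packages and reinstall the Software Center.

      sudo apt-get remove wine1.5                       # drop the package that owns /usr/bin/wine
      sudo dpkg --configure -a                          # finish any half-configured packages
      sudo apt-get -f install                           # let apt resolve the wine1.4 dependencies
      sudo apt-get install --reinstall software-center  # then repair the Software Center itself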

    Read the article

  • Diving into Scala with Cay Horstmann

    - by Janice J. Heiss
    A new interview with Java Champion Cay Horstmann, now up on otn/java, titled  "Diving into Scala: A Conversation with Java Champion Cay Horstmann," explores Horstmann's ideas about Scala as reflected in his much lauded new book,  Scala for the Impatient.  None other than Martin Odersky, the inventor of Scala, called it "a joy to read" and the "best introduction to Scala". Odersky was so enthused by the book that he asked Horstmann if the first section could be made available as a free download on the Typesafe Website, something Horstmann graciously assented to. Horstmann acknowledges that some aspects of Scala are very complex, but he encourages developers to simply stay away from those parts of the language. He points to several ways Java developers can benefit from Scala: "For example," he says, " you can write classes with less boilerplate, file and XML handling is more concise, and you can replace tedious loops over collections with more elegant constructs. Typically, programmers at this level report that they write about half the number of lines of code in Scala that they would in Java, and that's nothing to sneeze at. Another entry point can be if you want to use a Scala-based framework such as Akka or Play; you can use these with Java, but the Scala API is more enjoyable. " Horstmann observes that developers can do fine with Scala without grasping the theory behind it. He argues that most of us learn best through examples and not through trying to comprehend abstract theories. He also believes that Scala is the most attractive choice for developers who want to move beyond Java and C++.  When asked about other choices, he comments: "Clojure is pretty nice, but I found its Lisp syntax a bit off-putting, and it seems very focused on software transactional memory, which isn't all that useful to me. And it's not statically typed. I wanted to like Groovy, but it really bothers me that the semantics seems under-defined and in flux. And it's not statically typed. Yes, there is Groovy++, but that's in even sketchier shape. There are a couple of contenders such as Kotlin and Ceylon, but so far they aren't real. So, if you want to do work with a statically typed language on the JVM that exists today, Scala is simply the pragmatic choice. It's a good thing that it's such a nice choice." Learn more about Scala by going to the interview here.
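
    To give a flavour of the points above, here is a tiny illustration of our own (not an excerpt from the book): a one-line case class replaces the usual getters, equals and hashCode, and a collection pipeline replaces an explicit loop.

      // A case class gives you the constructor, equals, hashCode and toString for free,
      // and collection methods replace the usual explicit Java loop.
      case class Person(name: String, age: Int)

      val people = List(Person("Ada", 36), Person("Alan", 41), Person("Lin", 17))
      val adultNames = people.filter(_.age >= 18).map(_.name)   // List(Ada, Alan)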

    Read the article

  • Failed 12.04 installation

    - by Rob Sayer
    I tried installing Ubuntu 12.04 today. Not an upgrade, a new installation. It didn't work. My computer specs: Computer: Compaq presario CQ-104CA OS: Windows 7 Home 64 bit CPU: AMD V140 BIOS: latest Graphics: amd m880g with ati mobility radeon hd 4250 Wireless: atheros ar9285 Internal HD: SATA I wasn't connected to the internet at the time ... I know of a number of people who have installed ubuntu unconnected and just updated later. It seemed to go normally until I got to the part where I chose to install dual boot linux/windows. Then, the screen went black and the following text appeared (I left out the [OK]'s): checking battery state starting crash report submission daemon stating cpu interrupts balancing daemon stopping system V runlevel compatibility starting configure network device security stopping configure network device security stopping cold plug devices stopping log initial device creation starting enable remaining boot-time encrypting devices starting configure network device security starting save udev log and update rules stopping save udev log and update rules stopping enable remaining boot-time encrypted block devices checking for running unattended-upgrades acpid: exiting speech-dispatcher disabled: edit /etc/default/speech-disorder At this point, the CD is ejected. Then nothing. If I press the return key, it boots Windows. I don't think that's what's supposed to happen. Thinking the cd media or dvd drive may have been faulty, I downloaded the .iso again and made a bootable USB stick, as per your instructions. This time there was no cryptic crash screen. It just booted windows. I can't find any log files it may have left. Thinking the amd64 version may have been the wrong one, I tried downloading the x86 version. Same thing, both from cd and usb drive. Note I downloaded both files twice. I doubt it was a corrupted d/l. This is supposed to be a simple, transparent install. I went to the time and trouble of looking up my devices and drivers re ubuntu beforehand, and was prepared to do some configuration, though I know someone who has the same wireless device and his worked right out of the box. But I spent over 3 hours trying to install it with only the above to show for it.
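
    Two hedged checks worth doing before the next attempt (general advice, not from the post): verify the downloaded image, and rule out the graphics driver by booting the installer with nomodeset.

      md5sum ubuntu-12.04-desktop-i386.iso   # compare against the MD5SUMS file published on the mirror
      # If the image is good, try booting the installer with the 'nomodeset' kernel option
      # (press F6 at the CD/USB boot menu and tick nomodeset); on this era of ATI mobility
      # chips that often gets past a black screen during install.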

    Read the article

  • Configure Calendar Server 7 to Use the davUniqueId Attribute

    - by dabrain
    Starting with Calendar Server 7 Update 3 (Patch 08) we introduced a new attribute, davUniqueId, in the davEntity objectclass, to use as the unique identifier. The reason behind this is quite simple: the LDAP operational attribute nsUniqueId had been chosen as the default value used for the unique identifier, and it was discovered that this choice has a potentially serious downside. The problem with using nsUniqueId is that if the LDAP entry for a user, group, or resource is deleted and recreated in LDAP, the new entry would receive a different nsUniqueId value from the Directory Server, causing a disconnect from the existing account in the calendar database. As a result, recreated users cannot access their existing calendars. How do you configure Calendar Server to use the davUniqueId attribute? First, populate davUniqueId on the LDAP users. You can create an LDIF output file only, or (with the -x option) run ldapmodify directly from the populate-davuniqueid shell script.
      # ./populate-davuniqueid -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -b "o=red" -O -o /tmp/out.ldif
    The ldapmodify might fail as below; in that case the LDAP entry already has the 'daventity' objectclass, and for those entries you should run the populate-davuniqueid script without the -O option.
      # ldapmodify -x -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -c -f /tmp/out.ldif
      modifying entry "uid=mparis,ou=People,o=vmdomain.tld,o=red"
      ldapmodify: Type or value exists (20)
    In this case the user 'mparis' already has the objectclass 'daventity', so ldapmodify does not process this DN and just moves on to the next one (if you start ldapmodify with the -c option; otherwise it stops completely). The entry in the output file looks like this:
      dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
      changetype: modify
      add: objectclass
      objectclass: daventity
      -
      add: davuniqueid
      davuniqueid: 01a2c501-af0411e1-809de373-18ff5c8d
    If you run populate-davuniqueid without the -O option, or change the output file to
      dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
      changetype: modify
      add: davuniqueid
      davuniqueid: 01a2c501-af0411e1-809de373-18ff5c8d
    then ldapmodify works fine:
      # ldapmodify -x -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -c -f /tmp/out.ldif
      modifying entry "uid=mparis,ou=People,o=vmdomain.tld,o=red"
    The only issue I see here is that you need to verify which users might need the 'daventity' objectclass as well. Alternatively, start without the objectclass and only add it for the users where an 'Objectclass violation' is reported; that indicates the objectclass is missing. Now it is time to change the configuration to use the davuniqueid attribute:
      # ./davadmin config modify -o davcore.uriinfo.permanentuniqueid -v davuniqueid
    You also need to modify the search filter to use davuniqueid instead of nsuniqueid:
      # ./davadmin config modify -o davcore.uriinfo.subjectattributes -v "cn davstore icsstatus mail mailalternateaddress davUniqueId owner preferredlanguageuid objectclass ismemberof uniquemember memberurl mgrprfc822mailmember"
    Afterwards, IWC Calendar works fine and my test user is able to access all his old events.
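
    To find out up front which entries still lack the 'daventity' objectclass (a hedged helper against the same directory, not part of the original write-up), an ldapsearch does the job:

      # ldapsearch -x -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -b "o=red" "(!(objectClass=daventity))" dn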

    Read the article

  • Broken package after update: linux-headers, error brokencount >0

    - by escozul
    Ubuntu 12.04. After an update, I get a red warning icon in the system tray, warning about an error: broken count 0 Opening Update manager, I see that the broken package is linux-headers-3.2.0-33-generic-pae (new install) Specificaly I have my ubuntu on an AspireOne with 8gb internal storage. I tried apt-get clean as suggested in another question on this site, and tried reinstalling the package in Synaptic. I have tried to reboot but to no avail. I have also tried apt-get install --fix-broken and I get the following: sudo apt-get install --fix-broken [sudo] password for elina: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-headers-3.2.0-33-generic-pae The following NEW packages will be installed: linux-headers-3.2.0-33-generic-pae 0 upgraded, 1 newly installed, 0 to remove and 38 not upgraded. 1 not fully installed or removed. Need to get 0 B/977 kB of archives. After this operation 11,3 MB of additional disk space will be used. Do you want to continue [Y/n]; y (Reading database ... 437051 files and directories currently installed.) Unpacking linux-headers-3.2.0-33-generic-pae (from .../linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb) ... dpkg: error processing /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb (--unpack): unable to create `/usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h.dpkg-new' (while processing `./usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h'): No space left on device No apport report written because the error message indicates a disk full error dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) I've tried all suggestions I could find: sudo apt-get clean sudo apt-get autoclean sudo apt-get autoremove sudo apt-get update sudo apt-get upgrade sudo apt-get -f install sudo apt-get install --fix-broken Then I saw that on the error there was a mention about free space. So I did a df -h and the result was: Filesystem Size Used Avail Use% Mounted on /dev/sda1 7,0G 5,5G 1,1G 84% / udev 235M 4,0K 235M 1% /dev tmpfs 97M 816K 96M 1% /run none 5,0M 0 5,0M 0% /run/lock none 242M 352K 242M 1% /run/shm I see that on my root folder I have 1.1Gb free. The broken package is linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb which only takes up 11.3Mb on my hard drive. I'm soooo lost. I really hope there is something I'm missing here. I don't want to go about reformatting this bucket. It's really not worth the time. Any help for fixing this would be hot.
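
    A few hedged checks (not from the post) that often explain this exact symptom: kernel header packages unpack tens of thousands of tiny files, so a small root filesystem can run out of inodes while df -h still shows free space.

      df -i /                                    # check inode usage on the root filesystem
      dpkg -l 'linux-headers-*' | grep ^ii       # list header packages that are installed
      sudo apt-get purge linux-headers-3.2.0-XX-generic-pae   # XX: an older version you no longer need (placeholder)
      sudo apt-get -f install                    # then retry the broken install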

    Read the article

  • Extracting the Layout of all the Data Forms from the Relational Database

    - by RahulS
    Today I came across a question from one of our clients: "What members are used on each data form, WITHOUT having to go through the report generated out of our Planning app?" We worked with the client on this and arrived at a simple query. All the form-related information is stored in the following tables: HSP_FORM, HSP_FORMOBJ_DEF, HSP_FORMOBJ_DEF_MBR, HSP_FORM_ATTRIBUTES, HSP_FORM_CALCS, HSP_FORM_DV_CONDITION, HSP_FORM_DV_PM_RULE, HSP_FORM_DV_RULE, HSP_FORM_DV_USER_IN_PM_RULE, HSP_FORM_LAYOUT, HSP_FORM_MENUS and HSP_FORM_VARIABLES. If we want to retrieve just the members included, we can concentrate on: HSP_OBJECT to get the Object_ID for the form (Object_Type is 7 for forms, e.g. Select * from HSP_OBJECT where OBJECT_TYPE = 7); HSP_FORMOBJ_DEF to find the OBJDEF_ID for a particular form; and HSP_FORMOBJ_DEF_MBR to find the members using that OBJDEF_ID (here Mbr_ID is the ID of the member, Query_Type is the function such as Idesc or Level0, and Sequence is your sequence). The final table we can use is HSP_FORM_LAYOUT: Layout_Type is 0 -> POV, 1 -> Page, 2 -> Row, 3 -> Col; DIM_ID is the dimension ID and Ordinal is the position. Here is the query:
      SELECT HSP_OBJECT.OBJECT_NAME AS 'Form',
             HSP_OBJECT_2.OBJECT_NAME AS 'Dimension',
             HSP_OBJECT_1.OBJECT_NAME AS 'Member',
             HSP_FORMOBJ_DEF_MBR.QUERY_TYPE
      FROM   <DatabaseName>.dbo.HSP_FORM_LAYOUT HSP_FORM_LAYOUT,
             <DatabaseName>.dbo.HSP_FORMOBJ_DEF HSP_FORMOBJ_DEF,
             <DatabaseName>.dbo.HSP_FORMOBJ_DEF_MBR HSP_FORMOBJ_DEF_MBR,
             <DatabaseName>.dbo.HSP_MEMBER HSP_MEMBER,
             <DatabaseName>.dbo.HSP_OBJECT HSP_OBJECT,
             <DatabaseName>.dbo.HSP_OBJECT HSP_OBJECT_1,
             <DatabaseName>.dbo.HSP_OBJECT HSP_OBJECT_2
      WHERE  HSP_OBJECT.OBJECT_ID = HSP_FORMOBJ_DEF.FORM_ID AND
             HSP_FORMOBJ_DEF_MBR.OBJDEF_ID = HSP_FORMOBJ_DEF.OBJDEF_ID AND
             HSP_MEMBER.MEMBER_ID = HSP_FORMOBJ_DEF_MBR.MBR_ID AND
             HSP_OBJECT_1.OBJECT_ID = HSP_MEMBER.MEMBER_ID AND
             HSP_OBJECT_2.OBJECT_ID = HSP_MEMBER.DIM_ID AND
             HSP_FORM_LAYOUT.DIM_ID = HSP_MEMBER.DIM_ID AND
             HSP_FORM_LAYOUT.FORM_ID = HSP_FORMOBJ_DEF.FORM_ID AND
             ((HSP_OBJECT.OBJECT_TYPE=7))
      ORDER BY HSP_OBJECT.OBJECT_NAME
    Concentrate on the Test1 data form and its actual layout as shown in the original post. Corresponding Query_Type values for a few of the functions: 9 for Idesc, 3 for Ancestors, -9 for ILvl0Des, 8 for Desc, 4 for IAncestors. It's just a basic idea; you can do a lot on the basis of this. Cheers..!!! Rahul S. http://www.facebook.com/pages/HyperionPlanning/117320818374228
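
    A small companion query (a sketch against the same tables described above, not part of the original post) can also read the placement of each dimension straight from HSP_FORM_LAYOUT, decoding Layout_Type into the form areas:

      SELECT f.OBJECT_NAME AS form_name,
             d.OBJECT_NAME AS dimension,
             CASE l.LAYOUT_TYPE WHEN 0 THEN 'POV' WHEN 1 THEN 'Page'
                                WHEN 2 THEN 'Row' WHEN 3 THEN 'Column' END AS area,
             l.ORDINAL AS position
      FROM   HSP_FORM_LAYOUT l
             JOIN HSP_OBJECT f ON f.OBJECT_ID = l.FORM_ID
             JOIN HSP_OBJECT d ON d.OBJECT_ID = l.DIM_ID
      WHERE  f.OBJECT_TYPE = 7
      ORDER BY f.OBJECT_NAME, l.LAYOUT_TYPE, l.ORDINAL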

    Read the article

  • Test your internet connection - Emtel Mobile Internet

    After yesterday's report on Emtel Fixed Broadband (I'm still wondering where the 'fixed' part is), I did the same tests on Emtel Mobile Internet. For this I'm using the Huawei E169G HSDPA USB stick, connected to the same machine. Actually, this is my fail-safe internet connection and the system automatically switches between them if a problem, let's say timeout, etc. has been detected on the main line. For better comparison I used exactly the same servers on Speedtest.net. The results Following are the results of Rose Hill (hosted by Emtel) and respectively Frankfurt, Germany (hosted by Vodafone DE): Speedtest.net result of 31.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Mobile Internet) Speedtest.net result of 31.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Mobile Internet) As you might easily see, there is a big difference in speed between national and international connections. More interestingly are the results related to the download and upload ratio. I'm not sure whether connections over Emtel Mobile Internet are asymmetric or symmetric like the Fixed Broadband. Might be interesting to find out. The first test result actually might give us a clue that the connection could be asymmetric with a ratio of 3:1 but again I'm not sure. I'll find out and post an update on this. It depends on network coverage Later today I was on tour with my tablet, a Samsung Galaxy Tab 10.1 (model GT-P7500) running on Android 4.0.4 (Ice Cream Sandwich), and did some more tests using the Speedtest.net app. The results are actually as expected and in areas with better network coverage you will get better results after all. At least, as long as you stay inside the national networks. For anything abroad, it doesn't really matter. But see for yourselves: Speedtest.net result of 31.05.2013 between Cascavelle and servers in Rose Hill, Mauritius (Emtel - Mobile Internet), Port Louis, Mauritius and Kuala Lumpur, Malaysia It's rather shocking and frustrating to see how the speed on international destinations goes down. And the full capability of the tablet's integrated modem (HSDPA: 21 Mbps; HSUPA: 5.76 Mbps) isn't used, too. I guess, this demands more tests in other areas of the island, like Ebene, Pailles or Port Louis. I'll keep you updated... The question remains: Alternatives? After the publication of the test results on Fixed Broadband I had some exchange with others on Facebook. Sadly, it seems that there are really no alternatives to what Emtel is offering at the moment. There are the various internet packages by Mauritius Telecom feat. Orange, like ADSL, MyT and Mobile Internet, and there is Bharat Telecom with their Bees offer which is currently limited to Ebene and parts of Quatre Bornes.
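
    For anyone who wants to repeat these comparisons from a script rather than the web page, a rough command-line equivalent (this assumes the community speedtest-cli tool, which the article itself does not use):

      pip install speedtest-cli           # or download the speedtest_cli.py script directly
      speedtest-cli --list | head         # find the IDs of the Rose Hill and Frankfurt servers
      speedtest-cli --server <server-id>  # run the test against a specific server for a fair comparison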

    Read the article

  • Ubuntu 14.04: Lost my sound randomly, tried a few commands and I think I failed

    - by Marc-Antoine Théberge
    I lost my sound the other day and I tried to delete pulseaudio then reinstall, then I tried to delete it and install alsa, It did not work and I had to reinstall everything; overall bad idea... now I can't have any sound. Should I do a fresh install? I don't know how to boot an usb drive with GRUB... Here's my sysinfo System information report, generated by Sysinfo: 2014-05-28 05:45:58 http://sourceforge.net/projects/gsysinfo SYSTEM INFORMATION Running Ubuntu Linux, the Ubuntu 14.04 (trusty) release. GNOME: 3.8.4 (Ubuntu 2014-03-17) Kernel version: 3.13.0-27-generic (#50-Ubuntu SMP Thu May 15 18:08:16 UTC 2014) GCC: 4.8 (i686-linux-gnu) Xorg: 1.15.1 (16 April 2014 01:40:08PM) (16 April 2014 01:40:08PM) Hostname: mark-laptop Uptime: 0 days 11 h 43 min CPU INFORMATION GenuineIntel, Intel(R) Atom(TM) CPU N270 @ 1.60GHz Number of CPUs: 2 CPU clock currently at 1333.000 MHz with 512 KB cache Numbering: family(6) model(28) stepping(2) Bogomips: 3192.13 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 xtpr pdcm movbe lahf_lm dtherm MEMORY INFORMATION Total memory: 2007 MB Total swap: 1953 MB STORAGE INFORMATION SCSI device - scsi0 Vendor: ATA Model: ST9160310AS HARDWARE INFORMATION MOTHERBOARD Host bridge Intel Corporation Mobile 945GSE Express Memory Controller Hub (rev 03) Subsystem: ASUSTeK Computer Inc. Device 8340 PCI bridge(s) Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 02) (prog-if 00 [Normal decode]) Intel Corporation NM10/ICH7 Family PCI Express Port 2 (rev 02) (prog-if 00 [Normal decode]) Intel Corporation NM10/ICH7 Family PCI Express Port 4 (rev 02) (prog-if 00 [Normal decode]) Intel Corporation 82801 Mobile PCI Bridge (rev e2) (prog-if 01 [Subtractive decode]) Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 02) (prog-if 00 [Normal decode]) Intel Corporation NM10/ICH7 Family PCI Express Port 2 (rev 02) (prog-if 00 [Normal decode]) Intel Corporation NM10/ICH7 Family PCI Express Port 4 (rev 02) (prog-if 00 [Normal decode]) Intel Corporation 82801 Mobile PCI Bridge (rev e2) (prog-if 01 [Subtractive decode]) ISA bridge Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02) Subsystem: ASUSTeK Computer Inc. Device 830f IDE interface Intel Corporation 82801GBM/GHM (ICH7-M Family) SATA Controller [IDE mode] (rev 02) (prog-if 80 [Master]) Subsystem: ASUSTeK Computer Inc. Device 830f GRAPHIC CARD VGA controller Intel Corporation Mobile 945GSE Express Integrated Graphics Controller (rev 03) (prog-if 00 [VGA controller]) Subsystem: ASUSTeK Computer Inc. Device 8340 SOUND CARD Multimedia controller Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 02) Subsystem: ASUSTeK Computer Inc. Device 831a NETWORK Ethernet controller Qualcomm Atheros AR8121/AR8113/AR8114 Gigabit or Fast Ethernet (rev b0) Subsystem: ASUSTeK Computer Inc. Device 8324
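
    Before a fresh install, a hedged attempt at putting the stock audio stack back and reloading the HDA driver (package names are the usual Ubuntu ones, not taken from the post):

      sudo apt-get update
      sudo apt-get install --reinstall alsa-base alsa-utils pulseaudio
      sudo alsa force-reload     # helper script shipped with alsa-base; reloads the snd-hda-intel driver
      pulseaudio -k              # restart the per-user PulseAudio daemon
      aplay -l                   # confirm the NM10/ICH7 HDA controller is still listed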

    Read the article

  • ArchBeat Link-o-Rama for November 16, 2012

    - by Bob Rhubart
    X.509 Certificate Revocation Checking Using OCSP protocol with Oracle WebLogic Server 12c | Abhijit Patil Abhijit Patil's article focuses on how to use X.509 Certificate Revocation Checking Functionality with the OCSP protocol to validate in-bound certificates. Although this article focuses on inbound OCSP validation using OCSP, Oracle WebLogic Server 12c also supports outbound OCSP validation. Leveraging Oracle Scorecard and Strategy Management for Everyday BI Needs "Oracle Scorecard and Strategy Management (OSSM) is built-upon the premise that a scorecard system should not be separate from the BI system, like many comparable tools are today," says author Kevin McGinely. "Instead of a separate application with its own data, its own data definitions, and its own front-end, Oracle made the choice to integrate OSSM directly into OBIEE." Applying BI for personal productivity recognition and gamification | Capgemini Oracle Blog "It is quite obvious that if you want people to participate you need an appealing and intuitive user interface," says Capgemini's Henk Vermeulen in this interesting exploration of gamification in the enterprise. Build and release OSB projects with Maven | Edwin Biemond "With Maven we are able to build and deploy OSB projects," says Oracle ACE Edwin Biemond. "The artifacts generated by Maven called snapshots and releases can be automatically uploaded to a software repository. These versioned OSB jars can then be downloaded by the OSB Servers and deployed." Biemond shows you how in this detailed technical post. ADF Generator for Dynamic ADF BC and ADF UI | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis' post is an extension of his OOW12 presentation, "Oracle ADF Implementations Around the Globe: Best Practices," and includes the sample application he promised to share. Service-oriented organizations have a head start in the cloud race | ZDNet ZDNet SOA blogger Joe McKendrick offers a snapshot of a recent report by Forrester analyst James Staten. Oracle Fusion Middleware Security: X509 Fallback to Form | Debasish Bhattacharya Oracle Fusion Middleware A-Team architect Debasish Bhattacharya shares a solution that resulted from brainstorming with colleagues Chris Johnson and Brian Eidelman. "The solution is not very difficult," says Bhattacharya, "though it needs some additional configurations and coding." It's all presented in this detailed post. Agile Architecture | David Sprott "There is ample evidence that Agile Architecture is a primary contributor to business agility, yet we do not have a well understood architecture management system that integrates with Agile methods," observes David Sprott in this extensive post. Thought for the Day "Operating systems are like underwear — nobody really wants to look at them." — Bill Joy Source: SoftwareQuotes.com

    Read the article

  • Implementing the Reactive Manifesto with Azure and AWS

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/31/implementing-the-reactive-manifesto-with-azure-and-aws.aspx My latest Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, has just been published! I'd planned to do a course on dual-running a messaging-based solution in Azure and AWS for super-high availability and scale, and the Reactive Manifesto encapsulates exactly what I wanted to do. A "reactive" application describes an architecture which is inherently resilient and scalable, being event-driven at the core, and using asynchronous communication between components. In the course, I compare that architecture to a classic n-tier approach, and go on to build out an app which exhibits all the reactive traits: responsive, event-driven, scalable and resilient. I use a suite of technologies which are enablers for all those traits: ASP.NET SignalR for presentation, with server push notifications to the user; messaging in the middle layer for asynchronous communication between presentation and compute (Azure Service Bus Queues and Topics, AWS Simple Queue Service, AWS Simple Notification Service); and MongoDB at the storage layer for easy HA and scale, with minimal locking under load. Starting with a couple of console apps to demonstrate message sending, I build the solution up over 7 modules, deploying to Azure and AWS and running the app across both clouds concurrently for the whole stack - web servers, messaging infrastructure, message handlers and database servers. I demonstrate failover by killing off bits of infrastructure, and show how a reactive app deployed across two clouds can survive machine failure, data centre failure and even whole cloud failure. The course finishes by configuring auto-scaling in AWS and Azure for the compute and presentation layers, and running a load test with blitz.io. The test pushes masses of load into the app, which is deployed across four data centres in Azure and AWS, and the infrastructure scales up seamlessly to meet the load – the blitz report is pretty impressive: that's a 99.9% success rate for hits to the website, with the potential to serve over 36,000,000 hits per day – all from a few hours' build time, and a fairly limited set of auto-scale configurations. When the load stops, the infrastructure scales back down again to a minimal set of servers for high availability, so the app doesn't cost much to host unless it's getting a lot of traffic. This is my third course for Pluralsight, with Nginx and PHP Fundamentals and Caching in the .NET Stack: Inside-Out released earlier this year. Now that it's out, I'm starting on the fourth one, which is focused on C#, and should be out by the end of the year.

    Read the article

  • Software monetization that is not evil

    - by t0x1n
    I have a free open-source project with around 800K downloads to date. I've been contacted by some monetization companies from time to time and turned them down, since I didn't want toolbar malware associated with my software. I was wondering however, is there a non-evil way to monetize software ? Here are the options as I know them: Add a donation button. I don't feel comfortable with that as I really don't need "donations" - I'm paid quite well. Donating users may feel entitled to support etc. (see the second to last bullet) Add ads inside your application. In the web that may be acceptable, but in a desktop program it looks incredibly lame. Charge a small amount for each download. This model works well in the mobile world, but I suspect no one will go for it on the desktop. It doesn't mix well with open source, though I suppose I could charge only for the binaries (most users won't go to the hassle of compiling the sources). People may expect support etc. after having explicitly paid (see next bullet). Make money off a service / community / support associated with the program. This is one route I definitely don't want to take, I don't want any sort of hassle beyond coding. I assure you, the program is top notch (albeit simple) and I'm not aware of any bugs as of yet (there are support forums and blog comments where users may report them). It is also very simple, documented, and discoverable so I do think I have a case for supplying it "as is". Add affiliate suggestions to your installer. If you use a monetization company, you lose control over what they propose. Unless you can establish some sort of strong trust with the company to supply quality suggestions (I sincerely doubt it), I can't have that. Choosing your own affiliate (e.g. directly suggesting Google Toolbar) is possibly the only viable solution to my mind. Problem is, where do I find a solid affiliate that could actually give value to the user rather than infect his computer with crapware? I thought maybe Babylon (not the toolbar of course, I hate toolbars)?

    Read the article

  • Fraud and Anomaly Detection using Oracle Data Mining YouTube-like Video

    - by chberger
    I've created and recorded another YouTube-like presentation and "live" demos of the Oracle Advanced Analytics Option, this time focusing on Fraud and Anomaly Detection using Oracle Data Mining. [Note: It is a large MP4 file that will open and play in place. The sound quality is weak so you may need to turn up the volume.] Data is your most valuable asset. It represents the entire history of your organization and its interactions with your customers. Predictive analytics leverages data to discover patterns and relationships, and even to help you make informed predictions. Oracle Data Mining (ODM) automatically discovers relationships hidden in data. Predictive models and insights discovered with ODM address business problems such as: predicting customer behavior, detecting fraud, analyzing market baskets, profiling and loyalty. Oracle Data Mining, part of the Oracle Advanced Analytics (OAA) Option to the Oracle Database EE, embeds 12 high performance data mining algorithms in the SQL kernel of the Oracle Database. This eliminates data movement, delivers scalability and maintains security. But how do you find these very important needles, or possibly fraudulent transactions, in huge haystacks of data? Oracle Data Mining's 1-Class Support Vector Machine algorithm is specifically designed to identify rare or anomalous records. Oracle Data Mining's 1-Class SVM anomaly detection algorithm trains on what it considers to be "normal" records and builds a descriptive and predictive model, which can then be used to flag records that, on a multi-dimensional basis, appear to not fit in--or be different. Combined with clustering techniques to sort transactions into more homogeneous sub-populations for more focused anomaly detection analysis, and with Oracle Business Intelligence, Enterprise Applications and/or real-time environments to "deploy" fraud detection, Oracle Data Mining delivers a powerful advanced analytical platform for solving important problems. With OAA/ODM you can find suspicious expense report submissions, flag non-compliant tax submissions, fight fraud in healthcare claims and save huge amounts of money in fraudulent claims and abuse. This presentation and several brief demos will show Oracle Data Mining's fraud and anomaly detection capabilities.
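
    For readers who want to see roughly what that looks like in the database, here is a hedged PL/SQL sketch based on the documented DBMS_DATA_MINING package (it is not taken from the presentation, and the table, column and model names are made up). Passing a NULL target with the classification mining function is how ODM is put into anomaly-detection (one-class SVM) mode.

      BEGIN
        DBMS_DATA_MINING.CREATE_MODEL(
          model_name          => 'CLAIMS_ANOMALY',             -- hypothetical model name
          mining_function     => DBMS_DATA_MINING.CLASSIFICATION,
          data_table_name     => 'EXPENSE_CLAIMS',             -- hypothetical source table
          case_id_column_name => 'CLAIM_ID',
          target_column_name  => NULL);                        -- NULL target => one-class SVM
      END;
      /

      -- Rows the model predicts as 0 are the ones that "do not fit in".
      SELECT claim_id
      FROM   expense_claims
      WHERE  PREDICTION(claims_anomaly USING *) = 0;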

    Read the article

  • SSIS Debugging Tip: Using Data Viewers

    - by Jim Giercyk
    When you have an SSIS package error, it is often very helpful to see the data records that are causing the problem.  After all, if your input has 50,000 records and 1 of them has corrupt data, it can be a chore.  Your execution results will tell you which column contains the bad data, but not which record…..enter the Data Viewer. In this scenario I have created a truncation error.  The input length of [lastname] is 50, but the output table has a length of 15.  When it runs, at least one of the records causes the package to fail.     Now what?  We can tell from our execution results that there is a problem with [lastname], but we have no idea WHICH record?     Let’s identify the row that is actually causing the problem.  First, we grab the oft’ forgotten Row Count shape from our toolbar and connect it to the error output from our input query.  Remember that in order to intercept errors with the error output, you must redirect them.     The Row Count shape requires 1 integer variable.  For our purposes, we will not reference the variable, but it is still required in order for the package to run.  Typically we would use the variable to hold the number of rows in the table and refer back to it later in our process.  We are simply using the Row Count as a “Dead End” for errors.  I called my variable RowCounter.  To create a variable, with no shapes selected, right-click on the background and choose Variable.     Once we have setup the Row Count shape, we can right-click on the red line (error output) from the query, and select Data Viewers.  In the popup, we click the add button and we will see this:     There are other fancier options we can play with, but for now we just want to view the output in a grid.  WE select Grid, then click OK on all of the popup windows to shut them down.  We should now see a grid with a pair of glasses on the error output line.     So, we are ready to catch the error output in a grid and see that is causing the problem!  This time when we run the package, it does not fail because we directed the error to the Row Count.  We also get a popup window showing the error record in a grid.  If there were multiple errors we would see them all.     Indeed, the [lastname] column is longer than 15 characters.  Notice the last column in the grid, [Error Code – Description].  We knew this was a truncation error before we added the grid, but if you have worked with SSIS for any length of time, you know that some errors are much more obscure.  The description column can be very useful under those circumstances! Data viewers can be used any time we want to see the data that is actually in the pipeline;  they stop the package temporarily until we shut them.  Also remember that the Row Count shape can be used as a “Dead End”.  It is useful during development when we want to see the output from a dataflow, but don’t want to update a table or file with the data.  Data viewers are an invaluable tool for both development and debugging.  Just remember to REMOVE THEM before putting your package into production

    Read the article

  • Conflict between Change Control and ASL Mapping

    - by Jie Chen
    Yesterday I got a strange report that on Agile 9.3.1.2, adding a Supplier to an Item's Suppliers tab always removes all the data from the Item.PageTwo.MultiList01 field, which is assigned to a User Group list. The detailed problem description is as follows. In JavaClient, the MultiList01 attribute on the Parts class's PageTwo tab is enabled and assigned to a User Group list. In WebClient, the user created a new Part and assigned MultiList01 two UserGroups: "Global User Group Test1" and "Personal Group_Test1". He then went to the Suppliers tab to add three Suppliers. Switching back to the Part's TitleBlock, MultiList01 has lost the User Group data. To confirm whether MultiList01 really loses the data or saves other, wrong data, I needed to check the database, and I found that MultiList01 stores the strange value ",7976911,7976907,7976959,", which is exactly the IDs of these three Suppliers. From that I suspected the Supplier attribute on the Suppliers tab must be mapped to MultiList01. However, when I checked Supplier in JavaClient, "ASL mapped to" was blank. More interestingly, the database clearly shows the Supplier attribute (Base ID = 2000004219) is mapped to 2090, which is the Base ID of PageTwo.MultiList01. So far we can conclude that Supplier data really is mapped to MultiList01, even though we assigned MultiList01 to a User Group list and Supplier does not set "ASL mapped to". There must be another function that overrides the "ASL mapped to" visibility in JavaClient with higher priority: the "Change Controlled" function. We immediately see "ASL mapped to" with the value "MultiList01" when we disable Change Controlled for MultiList01. If an attribute is Change Controlled, Supplier data theoretically cannot be mapped to it, because Supplier can be modified dynamically by users, not by Changes. In a real Agile 9.3.1.2 situation this could be a code defect. We can imagine the scenario the customer ran into: he set up Parts.PageTwo.MultiList01 assigned to a Supplier list, then in the Parts.Suppliers.Supplier attribute he set "ASL mapped to" to "MultiList01". Later the company's business changed, so he made MultiList01 Change Controlled and re-assigned it to a User Group list. He forgot to remove "ASL mapped to" before he modified MultiList01. Finally, the solution depends on the real business need. If Supplier still needs to be mapped to a Parts.PageTwo attribute, change "ASL mapped to" to another attribute that is already assigned to a Suppliers list. If the "ASL mapped to" function is not needed, delete the data at the database level; we cannot do it from the JavaClient UI. delete propertytable where id in (select p.id from propertytable p, nodetable n where p.parentid = n.id and n.inherit = 2000004219 and propertyid = 794)
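
    Before running that delete, a cautious pre-check (the same tables and IDs as the statement above; the extra columns are only there for review) is to run the inner query on its own and look at the rows it would remove:

      select p.id, p.parentid, p.propertyid
      from propertytable p, nodetable n
      where p.parentid = n.id and n.inherit = 2000004219 and p.propertyid = 794;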

    Read the article

  • Thoughts on schemas and schema proliferation

    - by jamiet
    In SQL Server 2005 Microsoft introduced user-schema separation and since then I have seen the use of schemas increase; whereas before I would typically see databases where all objects were in the [dbo] schema, I now see databases that have multiple schemas; a database I saw recently had 31 (thirty one) of them. I can't help but wonder whether this is a good thing or not – clearly 31 is an extreme case but I question whether multiple schemas create more problems than they solve. I have been involved in many discussions that go something like this:
      Developer #1> "I have a new function to add to the database and I'm not sure which schema to put it in"
      Developer #2> "What does it do?"
      Developer #1> "It provides data to a report in Reporting Services"
      Developer #2> "Ok, so put it in the [reports] schema"
      Developer #1> "Well I could, but the data will only be used by our Financial reporting folks so shouldn't I put it in the [financial] schema?"
      Developer #2> "Maybe, yes"
      Developer #1> "Mind you, the data is supposed to be used for regulatory reporting to the FSA, should I put it in [regulatory]?"
      Developer #2> "Err…."
    You get the idea!!! The more schemas that exist in your database, the more chance there is that their supposed usages will overlap. I'm left wondering whether the use of schemas is actually necessary. I don't really see them as an aid to security because I generally believe that principals should be assigned permissions on objects as-needed on a case-by-case basis (and I have a stock SQL query that deciphers them all for me), so why bother using them at all? I can envisage a use where a database is used to house objects pertaining to many different business functions (which, in itself, is an ambiguous term) and in that circumstance perhaps a schema per business function would be appropriate; hence of late I have been loosely following this edict: If some objects in a database could be moved en masse to another database without the need to remove any foreign key constraints then those objects could legitimately exist in a dedicated schema. I am interested to know what other people's thoughts are on this. If you would like to share then please do so in the comments. @Jamiet
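
    For what it is worth, the kind of "stock SQL query" referred to above can be approximated with the catalog views; this is an illustration of the idea, not Jamie's actual script.

      SELECT pr.name AS principal_name, pe.permission_name, pe.state_desc,
             s.name + '.' + o.name AS object_name
      FROM   sys.database_permissions pe
      JOIN   sys.database_principals pr ON pr.principal_id = pe.grantee_principal_id
      JOIN   sys.objects o ON o.object_id = pe.major_id
      JOIN   sys.schemas s ON s.schema_id = o.schema_id
      WHERE  pe.class = 1   -- object- or column-level permissions
      ORDER BY pr.name, object_name;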

    Read the article

  • Proper Usage of Arrays and Functions [closed]

    - by Ssegawa Victor
    Can someone help me write a C program that solves the following problem? PROBLEM: Consider the faculty registrar who has to process results for 1st year 1st semester students. Students offer five courses: CSC 1100, CSK 1101, CSC 1104, CSC 1105 and CSC 1106. The courses have credit units 4, 4, 4, 3 and 3 respectively. Lecturers provide course work and exam marks. For each course, course work constitutes 40% of the final mark while the exam constitutes 60% of the final mark. The role of the registrar is to compute the final mark for each student for each course (the final mark must be a whole number) and to compute the grade and grade point of the students for each course they offered. According to senate regulations, grades and grade points are awarded to final marks according to the following criteria:
      Range      Grade   Grade Point
      90 – 100   A+      5.0
      80 – 89    A       5.0
      75 – 79    B+      4.5
      70 – 74    B       4.0
      65 – 69    C+      3.5
      60 – 64    C       3.0
      55 – 59    D+      2.5
      50 – 54    D       2.0
      45 – 49    E       1.5
      40 – 44    E-      1.0
      0 – 39     F       0.0
    The registrar must also put a comment 'Retake' on a student for every course where the grade point is less than 2.0, and compute the cumulative grade point average (CGPA) for each student. The senate formula is CGPA = (sum over i = 1..N of CU_i × GP_i) / (sum over i = 1..N of CU_i). Put a comment "Progress" on any student whose CGPA is greater than 2 and "Stay Put" on a student whose CGPA is less than 2. You are required to create a C program that considers a class of 25 students and:
      1. Initializes an array 'student' which stores student names
      2. Initializes arrays for course work and exam for each course. 'cw_csc_1100' and 'ex_csc_1100' store course work and exam marks (respectively) for CSC 1100. The same approach is used for all other courses
      3. Initializes the coursework and exam marks arrays with marks between 0 and 99
      4. Writes appropriate functions that will generate the final marks, generate grades, generate grade points, generate cumulative grade points, generate comments for students and comments for courses per student
      5. Creates appropriate arrays for final marks and inserts the data there using the appropriate functions
      6. Without having to create any extra arrays, uses the functions created to generate a report per student that looks like the one below.
      Student Name: Ngubiri
      Course Unit   Final mark   Grade   Grade Point   Course Comment
      CSC 1100      43           E-      1.0           Retake
      CSK 1101      50           D       2.0
      CSC 1104      59           D+      2.5
      CSC 1105      70           B       4.0
      CSC 1106      65           C+      3.5
      CGPA: 2.47
      Overall Comment: Progress
    NB: It is advisable that the indices are used to identify the owners, e.g. if student[x] is John, then cw_csc_1100[x] should be a mark for John, since the index is the same.
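
    As a starting point only (a minimal sketch, not a full answer to the assignment: the 25-student arrays, the per-course comments and the report printing described above are left out), the marking helpers could look like this:

      #include <stdio.h>

      #define NUM_COURSES 5

      static const int credit_units[NUM_COURSES] = {4, 4, 4, 3, 3};

      /* 40% coursework + 60% exam, rounded to a whole number */
      int final_mark(int cw, int ex) {
          return (int)(0.4 * cw + 0.6 * ex + 0.5);
      }

      /* Grade point from the senate table above */
      double grade_point(int mark) {
          if (mark >= 80) return 5.0;
          if (mark >= 75) return 4.5;
          if (mark >= 70) return 4.0;
          if (mark >= 65) return 3.5;
          if (mark >= 60) return 3.0;
          if (mark >= 55) return 2.5;
          if (mark >= 50) return 2.0;
          if (mark >= 45) return 1.5;
          if (mark >= 40) return 1.0;
          return 0.0;
      }

      /* CGPA = sum(CU_i * GP_i) / sum(CU_i) over the five courses */
      double cgpa(const int marks[NUM_COURSES]) {
          double weighted = 0.0;
          int units = 0;
          for (int i = 0; i < NUM_COURSES; i++) {
              weighted += credit_units[i] * grade_point(marks[i]);
              units += credit_units[i];
          }
          return weighted / units;
      }

      int main(void) {
          int marks[NUM_COURSES] = {43, 50, 59, 70, 65};   /* the 'Ngubiri' example above */
          printf("CGPA = %.2f\n", cgpa(marks));            /* prints 2.47 */
          return 0;
      }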

    Read the article

  • My problem with "The installation or removal of a software package failed"

    - by tulipelle
    Recently, when I open the Ubuntu Software Center, it asks me to repair a package. Then I found this message: installArchives() failed: (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 569135 files and directories currently installed.) Unpacking linux-image-3.5.0-42-generic (from .../linux-image-3.5.0-42-generic_3.5.0-42.65~precise1_amd64.deb) ... Done. dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65~precise1_amd64.deb (--unpack): failed in write on buffer copy for backend dpkg-deb during `./boot/vmlinuz-3.5.0-42-generic': No space left on device No apport report written because the error message indicates a disk full error dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65~precise1_amd64.deb Error in function: dpkg: dependency problems prevent configuration of linux-image-generic-lts-quantal: linux-image-generic-lts-quantal depends on linux-image-3.5.0-42-generic; however: Package linux-image-3.5.0-42-generic is not installed. dpkg: error processing linux-image-generic-lts-quantal (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic-lts-quantal: linux-generic-lts-quantal depends on linux-image-generic-lts-quantal; however: Package linux-image-generic-lts-quantal is not configured yet. dpkg: error processing linux-generic-lts-quantal (--configure): dependency problems - leaving unconfigured
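
    A hedged way to free the space the new kernel needs (general advice, not from the post) is to purge kernels you no longer boot and clear the apt cache before retrying:

      df -h / /boot                             # see which filesystem is actually full
      dpkg -l 'linux-image-*' | grep ^ii        # list the kernels that are installed
      sudo apt-get purge linux-image-3.5.0-XX-generic   # XX: an old kernel you no longer boot (placeholder)
      sudo apt-get clean                        # drop cached .deb files in /var/cache/apt/archives
      sudo apt-get -f install                   # then let apt finish the interrupted install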

    Read the article

  • apt-get upgrade stuck at the same package (openjdk-6-jre-headless)

    - by decibyte
    I'm stuck, can't upgrade my system. Running sudo apt-get upgrade gives me the following: mmm@alalunga:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done The following packages have been kept back: ginn libgrip0 linux-generic-pae linux-headers-generic-pae linux-image-generic-pae The following packages will be upgraded: apport apport-gtk bind9-host build-essential dhcp3-client dhcp3-common dnsutils eog evince evince-common firefox firefox-branding firefox-dbg firefox-globalmenu firefox-gnome-support firefox-locale-en gimp gimp-data gir1.2-totem-1.0 glib-networking glib-networking-common glib-networking-services gnupg gpgv icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-6-plugin icedtea-netx icedtea-netx-common icedtea-plugin isc-dhcp-client isc-dhcp-common libapache2-mod-php5 libart-2.0-2 libbind9-80 libdns81 libevince3-3 libgimp2.0 libisc83 libisccc80 libisccfg82 liblwres80 libssl-dev libssl-doc libssl1.0.0 libtotem0 linux-firmware linux-libc-dev openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssl php-pear php5-cli php5-common php5-curl php5-dev php5-gd php5-mysql php5-xsl policykit-1-gnome python-apport python-django python-gst0.10 python-problem-report resolvconf thunderbird thunderbird-globalmenu thunderbird-gnome-support totem totem-common totem-mozilla totem-plugins xserver-xorg-input-synaptics 74 upgraded, 0 newly installed, 0 to remove and 5 not upgraded. Need to get 317 MB/327 MB of archives. After this operation, 1.481 kB of additional disk space will be used. Do you want to continue [Y/n]? Get:1 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:2 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:3 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:4 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:5 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:6 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:7 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] 9% [7 openjdk-6-jre-headless 27,3 MB/27,3 MB 100%] It keeps downloading the package openjdk-6-jre-headless, then does nothing for a while (hanging on what's the last line above), then download the package again. It's at its 13th download attempt at the moment of writing. The actual downloads seem to be done just fine, but whatever it does after downloading seems to be failing. I tried removing openjdk-6, but then it wanted to install openjdk-7 instead, with the same result, hanging at openjdk-7-jre-headless instead. I also tried changing servers from my local (Danish) to the main server. No luck. It's also keeping me from upgrading alle the other packages. What to do?
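
    One hedged sequence to break the loop (not from the post; a package that keeps re-downloading usually keeps failing its checksum after download): discard the partial files, refresh the package lists, and retry just that one package.

      sudo rm -f /var/cache/apt/archives/partial/*
      sudo apt-get clean
      sudo apt-get update
      sudo apt-get install openjdk-6-jre-headless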

    Read the article

  • Reporting on common code smells : A POC

    - by Dave Ballantyne
    Over the past few blog entries, I've been looking at parsing TSQL scripts in a variety of ways for a variety of tasks. In my last entry, 'How to prevent 'Select *' : The elegant way', I looked at parsing SQL to report upon uses of SELECT *. The obvious question leading on from this is, "Great, what about other code smells?" Well, using the language service parser to do that was turning out to be a bit of a hard job; sure, I was getting tokens, but no real context. I wasn't even being told when an end of statement had been reached. One of the other parsing options available from Microsoft is exposed in the assembly 'Microsoft.SqlServer.TransactSql.ScriptDom'; this is, I believe, installed with the client development tools for SQL Server. It is much more feature rich than the original parser I had used and breaks a TSQL script into intuitive classes for analysis. So, what sort of smells can I now find using it? Well, for an opening gambit, quite a nice little list:
      Use of NOLOCK
      Set of READ UNCOMMITTED
      Use of SELECT *
      Insert without column references
      Explicit datatype conversion on Sargs
      Cross server selects
      Non use of two-part naming convention
      Table and Query hint usage
      Changes in set options
      Use of single line comments
      Use of ordinal column positions in ORDER BY clause
    Now, let's not argue the "it depends" point on whether some of these really are smells; as an academic exercise it is quite interesting. The code is available from this link: https://www.dropbox.com/s/rfk32sou4fzl2cw/TSQLDomTest.zip. All the usual disclaimers apply to this code; I cannot be held responsible for anything ranging from mild annoyance through to universe destruction due to the use of this code or examples. The zip file contains a PowerShell script and my test cases. The assembly used requires .Net 4 to run, which means that you will need PowerShell 3 (though I'm running through PowerGUI and all works OK). The code searches for all .sql files in the folder hierarchy under the working path (you can override this if you want by simply changing the $Folder variable) and processes each in turn for the smells. Feedback is not great at the moment; all it does is output to an XML file (Smells.xml) the offset position and a description of the smell found. Right now, I am interested in your feedback. What do you think? Is this (or should it be) more than an academic exercise? Can tooling such as this be used as some form of code quality measure? Does it work? Do you have a case listed above which is not being reported? Do you have a case that you would love to be reported? Let me know, please: [email protected]. Thanks
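
    To give a flavour of what the ScriptDom approach looks like (a hedged C# sketch, not the code in the zip file), detecting the first smell on the list, SELECT *, can be done with a visitor over the parse tree; it assumes the TSql110Parser class from Microsoft.SqlServer.TransactSql.ScriptDom.

      using System;
      using System.Collections.Generic;
      using System.IO;
      using Microsoft.SqlServer.TransactSql.ScriptDom;

      class SelectStarVisitor : TSqlFragmentVisitor
      {
          public override void Visit(SelectStarExpression node)
          {
              Console.WriteLine("SELECT * found at line {0}", node.StartLine);
          }
      }

      class Program
      {
          static void Main()
          {
              var parser = new TSql110Parser(true);               // true = quoted identifiers on
              IList<ParseError> errors;
              TSqlFragment fragment = parser.Parse(
                  new StringReader("SELECT * FROM dbo.SomeTable;"), out errors);
              fragment.Accept(new SelectStarVisitor());           // walk the parse tree
          }
      }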

    Read the article

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server side) on a new forum (a brand new product from a technical point of view) we launched in Germany a few months ago, and I'm quite surprised by the results I'm getting. I monitor our response time using Apache logs and our own implementation of the Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms where our old product was responding in about 1050 ms. On the other side, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today, where it was 700 ms three months ago with our old product. I figured that GWT was taking client-side metrics into account, so I added some measures to our Boomerang beacon, and everything looks just fine. I've also run some random pages through YSlow and Google's Page Speed and everything looks better than it was before. We even have an 82% on Google's Page Speed tool, which is quite cool for a site with some ads in it :) Lately, we have signed a deal with Akamai to use two of their products: CDN for our static files (we were using another CDN before but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems that these changes have improved response time from 650 ms to about 500 ms, which is good (still not great, but it is definitely an improvement). But Webmaster Tools continues to report an increasing average response time where we see it decreasing over the same period. Have you ever had the same kind of weird behavior on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools, so that we could improve our site and constantly check if it is what Google wants? Edit 2011/07/26: Thanks for your answers, guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one for now. We probably found an issue on our side with some very slow pages (around 3000 ms!!) and we are trying to fix them. I'll keep you posted as soon as I have some info. Thanks again!

    Read the article

  • As a tooling/automation developer, can I be making better use of OOP?

    - by Tom Pickles
    My time as a developer (~8 yrs) has been spent creating tooling/automation of one sort or another. The tools I develop usually interface with one or more APIs. These APIs could be Win32, WMI, VMware, a help-desk application, LDAP, you get the picture. The apps I develop could be just to pull back data and store/report. It could be to provision groups of VMs to create live-like mock environments, update a trouble ticket, etc. I've been developing in .Net and I'm currently reading into design patterns and trying to think about how I can improve my skills to make better use of, and increase my understanding of, OOP. For example, I've never used an interface of my own making in anger (which is probably not a good thing), because I honestly cannot identify where using one would benefit later on when modifying my code. My classes are usually very specific and I don't create similar classes with similar properties/methods which could use a common interface (like perhaps a car dealership or shop application might). I generally use an n-tier approach to my apps, having a presentation layer and a business logic/manager layer which interfaces with layer(s) that make calls to the APIs I'm working with. My business entities are always just method-less container objects, which I populate with data and pass back and forth between my API-interfacing layer, using static methods to proxy/validate between the front and the back end. My code, by the nature of my work, has few common components, at least from what I can see. So I'm struggling to see how I can better make use of OOP design and perhaps reusable patterns. Am I right to be concerned that I could be smarter about how I work, or is what I'm doing now right for my line of work? Or am I missing something fundamental in OOP? EDIT: Here is some basic code to show how my manager and API-facing layers work. I use static classes as they do not persist any data, only facilitate moving it between layers.
      public static class MgrClass
      {
          public static bool PowerOnVM(string VMName)
          {
              // Perform logic to validate or apply biz logic
              // call APIClass to do the work
              return APIClass.PowerOnVM(VMName);
          }
      }

      public static class APIClass
      {
          public static bool PowerOnVM(string VMName)
          {
              // Calls to 3rd party API to power on a virtual machine
              // returns true or false if was successful for example
          }
      }
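
    As one possible direction (a hedged sketch, not a prescription; IVmProvider and the class names below are made up), the API-facing class can be put behind an interface so the manager layer depends on an abstraction rather than a static class, which also makes it trivial to substitute a fake provider in tests or swap hypervisors later.

      public interface IVmProvider
      {
          bool PowerOnVM(string vmName);
      }

      public class VMwareProvider : IVmProvider    // wraps the real third-party API calls
      {
          public bool PowerOnVM(string vmName)
          {
              // call the 3rd-party API here
              return true;
          }
      }

      public class VmManager
      {
          private readonly IVmProvider _provider;

          public VmManager(IVmProvider provider) { _provider = provider; }

          public bool PowerOnVM(string vmName)
          {
              // validation / business rules, then delegate to the provider
              return _provider.PowerOnVM(vmName);
          }
      }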

    Read the article

  • ITT Corporation Goes Live on Oracle Sales and Marketing Cloud Service (Fusion CRM)!

    - by Richard Lefebvre
    Back in Q2 of FY12, a division of ITT invited Oracle to demo our CRM On Demand product while the group was considering Salesforce.com. Chris Porter, our Oracle Direct sales representative learned the players and their needs and began to develop relationships. We lost that deal, but not Chris's persistence. A few months passed and Chris called on the ITT Shape Cutting Division's Director of Sales to see how things were going. Chris was told that the plan was for the division to buy more Salesforce.com. In fact, he informed Chris that he had just sent his team to Salesforce.com training. During the conversation, Chris mentioned that our new Oracle Sales Cloud Service could run with Outlook. This caused the ITT Sales Director to reconsider the plan to move forward with our competition. Oracle was invited back to demo the Oracle Sales and Marketing Cloud Service (Fusion CRM) and after it concluded, the Director stated, "That just blew your competition away." The deal closed on June 5th , 2012 Our Oracle Platinum Partner, Intelenex, began the implementation with ITT on July 30th. We are happy to report that on September 18th, the ITT Shape Cutting Division successfully went live on Oracle Sales and Marketing Cloud Service (Fusion CRM). About: ITT is a diversified leading manufacturer of highly engineered critical components and customized technology solutions for growing industrial end-markets in energy infrastructure, electronics, aerospace and transportation. Building on its heritage of innovation, ITT partners with its customers to deliver enduring solutions to the key industries that underpin our modern way of life. Founded in 1920, ITT is headquartered in White Plains, NY, with 8,500 employees in more than 30 countries and sales in more than 125 countries. The ITT Shape Cutting Division provides plasma lasers and controls with the Burny, Kaliburn, and AMC brands. Oracle Fusion Products: Oracle Sales and Marketing Cloud Service (Fusion CRM) including: • Fusion CRM Base • Fusion Sales Cloud • Fusion Mobile and Desktop Integration • Automated Forecasting Adoption Model: SaaS Partner: Intelenex Business Drivers: The ITT Shape Cutting Division wanted to: better enable its Sales Force with email and mobile CRM capabilities simplify and automate its complex sales processes centrally manage and maintain customer contact information Why We Won: ITT was impressed with the feature-rich capabilities of Oracle Sales and Marketing Cloud Service (Fusion CRM), including sales performance management and integration. The company also liked the product's flexibility and scalability for future growth. Expected Benefits: Streamlined accurate forecasting Increased customer manageability Improved sales performance Better visibility to customer information

    Read the article

  • No GRUB Screen or recovery mode on Boot after 12.04 Upgrade

    - by Nick
    I tried the live boot CD and boot-repair, and also loaded the Desktop install CD, and it looks like all partitions check out OK. However, when I try to boot Linux (the only bootable partition on the computer) I get a blank screen. Every so often the screen gives me something akin to:

        Assuming write through cache
        Asking for cache data failed

    It appears to start booting, then hangs, and Ctrl+Alt+Delete shuts down the machine. The last message during boot is "Starting TiMidity++ ALSA midi emulation... [OK]".

    I used boot-repair to generate a boot info report. One thing looks odd to me: it reports a missing core.img on /dev/sda1. Here is the full info:

        Boot Info Script 0.61.full + Boot-Repair extra info    [Boot-Info August 2nd 2012]

        ============================= Boot Info Summary: ===============================

        => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of
           the same hard drive for core.img. core.img is at this location and looks
           for (,msdos1)/boot/grub on this drive.
        => Windows is installed in the MBR of /dev/sdb.

        sda1: __________________________________________
            File system:       ext4
            Boot sector type:  Grub2 (v1.99)
            Boot sector info:  Grub2 (v1.99) is installed in the boot sector of sda1
                               and looks at sector 18406911 of the same hard drive
                               for core.img, but core.img can not be found at this
                               location.
            Operating System:  Ubuntu 12.04.1 LTS
            Boot files:        /boot/grub/grub.cfg /etc/fstab
                               /boot/extlinux/extlinux.conf /boot/grub/core.img

        sda2: __________________________________________
            File system:       Extended Partition
            Boot sector type:  -
            Boot sector info:

        sda5: __________________________________________
            File system:       swap
            Boot sector type:  -
            Boot sector info:

        sdb1: __________________________________________
            File system:       ntfs
            Boot sector type:  Windows XP: NTFS
            Boot sector info:  No errors found in the Boot Parameter Block.
            Operating System:
            Boot files:

        ============================ Drive/Partition Info: =============================

        Drive: sda _______________________________________
        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes

        Partition   Boot  Start Sector  End Sector   # of Sectors  Id  System
        /dev/sda1    *             63   307,339,514  307,339,452   83  Linux
        /dev/sda2          307,339,515  312,576,704    5,237,190    5  Extended
        /dev/sda5          307,339,578  312,576,704    5,237,127   82  Linux swap / Solaris

        Drive: sdb _______________________________________
        Disk /dev/sdb: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes

        Partition   Boot  Start Sector  End Sector   # of Sectors  Id  System
        /dev/sdb1                2,048  625,142,447  625,140,400    7  NTFS / exFAT / HPFS

        "blkid" output: ____________________________________
        Device     UUID                                  TYPE      LABEL
        /dev/loop0                                       squashfs
        /dev/sda1  11b4d633-7863-40b2-a6ca-da5f82c3ad0b  ext4
        /dev/sda5  cb8d65f4-8cf9-4088-b804-e3dea2151033  swap
        /dev/sdb1  349E7C109E7BC8BE                      ntfs      Personal1

        ================================ Mount points: =================================
        Device     Mount_Point       Type     Options
        /dev/sdb1  /media/Personal1  fuseblk  (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions)
        /dev/sr0   /live/image       iso9660  (ro,noatime)

        ...(a bunch of config file info - let me know if anyone wants to see it!)

    But usually I just get "Cannot Display This Video Mode", which I know means the video output is not usable by the monitor.

    I'm looking for a way to get into a recovery mode. I'd really like to avoid wiping the drive. Any thoughts?

    Read the article

  • Agile Documentation

    - by Nick Harrison
    We all know that one of the premises of the Agile Manifesto is to value Working Software over Comprehensive Documentation. This is a wonderful idea, and it takes a tremendous burden off of project implementations. I have seen as many projects fail because of the maintenance weight of the project documentation as I have for any other reason.

    But this goal, as important as it is, may not always be practical. Sometimes the client will simply insist on tedious documentation despite the arguments against it. This may be to calm a nervous client. This may be to satisfy an audit / compliance requirement. This may be a none-too-subtle attempt at sabotaging the project. OK, it is probably not an all-out attempt to sabotage the project, but it will probably feel that way.

    So what can we do to keep to the spirit of the Agile Manifesto but still meet the needs of the client wanting the documentation? This is a good question that I have been puzzling over lately, and I hope to explore some possible answers more fully here.

    A common theme that my solutions are likely to follow is the same theme that I often follow when simplifying complex business logic: make it table driven! My thought is that the sought-after documentation could be a report, or reports, out of a metadata repository. Reports are much easier to maintain than hand-written documentation. Here are a few additional advantages that we can explore over time:

    • Reports can take advantage of the fact that different people have different needs and different format requirements.
    • Reports and the supporting metadata are more easily validated, and the validation can be automated.
    • If the application itself uses this metadata, then there never has to be a question as to whether or not the metadata is up to date. It is up to date, or the application would not work.
    • In many cases we should be able to automatically gather most of the metadata that we need using reflection, system tables, etc.

    I think that this will lower the total cost of ownership for the documentation and may provide something useful beyond having a pretty document to look at. What are your thoughts?
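    To make the reflection idea concrete, here is a minimal sketch, not from the article: the MetadataReport name is hypothetical, and it shows only one narrow slice of the approach, pulling type and property metadata out of a loaded assembly so that a documentation report can be generated from what the application actually contains rather than maintained by hand.

        // Minimal sketch (hypothetical): walk an assembly with reflection and
        // print one line per public type and property, as the raw material for
        // a generated documentation report.
        using System;
        using System.Linq;
        using System.Reflection;

        public static class MetadataReport
        {
            public static void Print(Assembly assembly)
            {
                foreach (var type in assembly.GetExportedTypes().OrderBy(t => t.FullName))
                {
                    Console.WriteLine(type.FullName);

                    foreach (var prop in type.GetProperties(BindingFlags.Public | BindingFlags.Instance))
                    {
                        // Name and type of each public instance property.
                        Console.WriteLine("    {0} : {1}", prop.Name, prop.PropertyType.Name);
                    }
                }
            }
        }

        // Usage: report on the running application's own assembly.
        // MetadataReport.Print(Assembly.GetExecutingAssembly());

    The same report could just as easily read from the metadata repository's tables or render to HTML or PDF, which is where the "different people, different formats" advantage comes in.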

    Read the article

  • Revenue Recognition: Performance Obligations Pass a Hurdle

    - by Theresa Hickman
    I met up with Seamus Moran, our resident accounting expert, to get his thoughts about the latest happenings with IFRS. Last week, on March 13, the comment period on the FASB and IASB exposure draft "Revenue From Contracts with Customers" closed. FASB and IASB received just over 20 comment letters, a very small number. The implication is that the exposure draft does reflect general acceptance, and therefore will be published as both a US and an internationally generally accepted accounting standard.

    At a recent conference call, FASB and IASB said they expected to complete their report to both Boards on the comments by early summer, complete their deliberation of the comments by the fall, and draft the final standard text by late this year. It is assumed the concept of Performance Obligations would become US GAAP and IFRS in place of the existing standards. They confirmed that all existing US GAAP and IFRS guidelines would be withdrawn, and that they were in dialogue with the SEC on withdrawing the SEC guidelines on the revenue issue as well.

    The open question is: when will Performance Obligations become effective? The Boards have said that they would like this Revenue Recognition standard and the Lease Accounting standard to be effective at the same time, because whatever isn't insurance, interest, or a lease is a revenue arrangement. However, ascertaining what is generally acceptable in respect of leases is proving a little elusive, and the Boards have recently diverged a little on the P&L side of the accounting (although both agree that there will be no off-balance-sheet leases). It is therefore likely that the Lease standard might be delayed. One wonders whether the Boards will define the effectivity of the Revenue standard independently of the Lease standard or stick with their resolve to make them co-effective. The Boards have also said that neither standard will be effective before June 2015.

    Here is the gist of the new Revenue Recognition principle: recognize revenue to depict the transfer of goods or services in an amount that reflects the consideration expected to be entitled in exchange for those goods and services.

    Steps to apply the core principle:
    1. Identify the contract with the customer
    2. Identify the separate performance obligations
    3. Determine the transaction price
    4. Allocate the transaction price
    5. Recognize revenue when a performance obligation is satisfied
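    Step 4, allocating the transaction price, lends itself to a small worked example. The sketch below is not from the article; the class and figures are hypothetical, and it assumes the exposure draft's general approach of allocating the transaction price across performance obligations in proportion to their standalone selling prices.

        // Hypothetical sketch: allocate a contract's transaction price across its
        // performance obligations in proportion to their standalone selling prices.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class PerformanceObligation
        {
            public string Description { get; set; }
            public decimal StandaloneSellingPrice { get; set; }
            public decimal AllocatedPrice { get; set; }
        }

        public static class RevenueAllocator
        {
            public static void Allocate(decimal transactionPrice, IList<PerformanceObligation> obligations)
            {
                decimal totalStandalone = obligations.Sum(o => o.StandaloneSellingPrice);

                foreach (var obligation in obligations)
                {
                    // Each obligation receives its proportional share of the contract price.
                    obligation.AllocatedPrice = Math.Round(
                        transactionPrice * obligation.StandaloneSellingPrice / totalStandalone, 2);
                }
            }
        }

        // Example: a $900 contract covering a machine ($800 standalone) and a year of
        // support ($200 standalone) allocates $720 to the machine and $180 to support;
        // each amount is then recognized as its performance obligation is satisfied.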

    Read the article
