Search Results

Search found 16331 results on 654 pages for 'option'.

  • My Website (ports) Have Been Hijacked!

    - by ChrisD
    This is one of those scary problems that turns out to have a pretty easy solution. I tried to view one of my websites hosted by IIS on my primary workstation and the site wouldn't render. I checked IIS Admin and the site was there, but I couldn't access it on either port 443 or port 80. Reviewing the event log, I found the following entry:

        The World Wide Web Publishing Service (WWW Service) did not register the URL prefix http://x.x.x.x:80/ for site 1. The site has been disabled. The data field contains the error number.

    I disabled the IIS service (issued Net Stop W3svc from an admin command prompt) and then scanned for anything listening on port 80:

        C:\Users\cdarrigo>netstat -ano | findstr 80
          TCP    0.0.0.0:80             0.0.0.0:0              LISTENING       3124

    This confirmed that something had hijacked my ports: another process was listening on port 80, and it was preventing IIS from serving up my site. A quick phone call to a friend taught me that the last number shown above (3124) is the process ID of the process that's listening on the port, so whatever process had PID 3124 had to be stopped. I scanned my process list and determined it was, much to my surprise, Skype. I exited the Skype application, restarted the IIS service, and then manually restarted the web site. This time, browsing to my site succeeded.

    So why was Skype listening on those ports? A quick Google search revealed the answer: "Skype listens on those ports to increase quality." Really? "You might become a supernode if those ports are open." No thanks. I'm not sure how accurate those statements are, but I want to disable this behavior in Skype nonetheless. Fortunately, Skype provides a configuration option to turn it off:

        1. Launch Skype and log in.
        2. From the Tools menu, select Options.
        3. Select the Advanced options and then Connection.
        4. Uncheck the box "Use port 80 and 443 as alternatives for incoming connections".

    Back to development bliss.
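
    If you would rather script that first check than eyeball netstat output, a simple bind test tells you whether a port is already owned by another process. This is a minimal sketch (plain Python, my own illustration rather than anything from the original write-up), using the two ports involved here:

        import socket

        def port_in_use(port, host="0.0.0.0"):
            """Return True if binding host:port fails, i.e. something already owns it."""
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                s.bind((host, port))
            except OSError:
                # Bind failed: the port is taken (in this story, by Skype).
                # Note: on Linux, binding ports below 1024 without root also fails here.
                return True
            finally:
                s.close()
            return False

        if __name__ == "__main__":
            for port in (80, 443):
                print(port, "in use" if port_in_use(port) else "free")

    This only answers "is the port taken?"; mapping the port back to a process still needs netstat -ano as shown above.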

  • Using ext4 in VMware machine

    First of all, using a journaling filesystem like NTFS, ext4, XFS, or JFS (not to name all of them) is a very good idea and nowadays it is unthinkable not to. Linux offers a good variety of options as the journaling filesystem for your system. For years I have been using SGI's XFS and I am pretty confident with the stability, performance and reliability of the system. In earlier years I had to struggle with incompatibilities between XFS and the boot loader; using an ext2-formatted /boot solved this issue. But, wow, that is ages ago!

    Lately, I had to set up a fresh Lucid Lynx (Ubuntu 10.04 LTS) system for a change of our internal groupware / messaging system. Therefore, I fired up a new virtual machine with an almost standard configuration in VMware Server and ran through our network-based PXE boot and installation procedure. At a certain step in this process, Ubuntu asks you about the partitioning of your hard drive(s). Honestly, I have to say that only out of curiosity did I stick with the "default" suggestion and put my faith and trust in the Ubuntu installation routine... resulting in an ext4-based root mount point ( / ). The rest of the installation went on without further concerns or worries.

    Note: I really can't remember why I chose to go away from my favourite... Well, it would turn out to be the wrong decision after all.

    Ok, let's continue the story about ext4 in a VMware-based virtual machine. After some hours installing additional packages and configuring the new system to use LDAP for general authentication and login, I had an "out-of-the-box" usable enterprise messaging system based on Zarafa 6.40 Community Edition, including a proper SSL-based Webaccess interface and the Z-Push extension for ActiveSync with my Nokia mobile. Straightforward and pretty nice for the time spent on the setup. Having priority on other tasks, I let the system just run and didn't pay any further attention to it at all. Until I ran into an upgrade of "Mail for Exchange" on Symbian OS. My mobile did not bother me at all with the upgrade and everything went smoothly, but trying to re-establish the ActiveSync connection to the Zarafa messaging system resulted in a frustrating situation.

    So, I shifted my focus back to the Linux system and I was amazed to find that the root filesystem had been remounted read-only due to hard drive failures, or at least ext4 reported errors. Firing up Google only confirmed my concerns, and it seems to me that ext4 in VMware-based virtual machines does not look like a stable and reliable candidate. You might consider reading these external resources:

        ext4 fs corruption under VMWare Server 2.01
        Bug #389555 - ext4 filesystem corruption

    Well, I learned my lesson, and ext{2|3|4}-based filesystems are not going to be used on any of my Linux systems or customer installations in the future.

    Addendum: I did not try this setup in other virtualization environments like VirtualBox, qemu, kvm, Xen, etc.

  • Xorg.conf (nvidia) Second Monitor getting settings of first

    - by HennyH
    I've been spending the weekend (and some time before that) trying to set up my Korean QHD270 and Benq G2222HDL monitors with Ubuntu 13.10. With the nouveau drivers installed, both monitors function perfectly fine. After installing the nvidia drivers the Benq works but the QHD270 does not. Now, after days of struggling I managed to get the QHD270 to work following a mixture of blogs, particularly this one and learnitwithme. Now, unfortunately, my G2222HDL does not work. I fixed the QHD270 by supplying a custom EDID; my xorg.conf looks like so (excluding keyboard and mouse):

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen         "Default Screen" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
        EndSection

        Section "Monitor"
            Identifier     "Configured Monitor"
        EndSection

        Section "Device"
            Identifier     "Configured Video Device"
            Driver         "nvidia"
            Option         "CustomEDID" "DFP:/etc/X11/edid-shimian.bin"
        EndSection

        Section "Screen"
            Identifier     "Default Screen"
            Device         "Configured Video Device"
            Monitor        "Configured Monitor"
        EndSection

    Now, I tried defining a new Device, Monitor and Screen, then in ServerLayout adding Screen "Second Screen" RightOf "Default Screen", but after doing so neither monitor worked. Hoping to fix the issue using a GUI-based tool, I opened up NVIDIA X Server Settings, which shows my current layout; it seems that something is being output to the monitor, as suggested by my print screen. Any help would be greatly appreciated.

    Output of xrandr:

        Screen 0: minimum 8 x 8, current 5120 x 1440, maximum 16384 x 16384
        DVI-I-0 disconnected (normal left inverted right x axis y axis)
        DVI-I-1 connected primary 2560x1440+0+0 (normal left inverted right x axis y axis) 597mm x 336mm
           2560x1440      60.0*+
        HDMI-0 disconnected (normal left inverted right x axis y axis)
        DP-0 disconnected (normal left inverted right x axis y axis)
        DVI-D-0 connected 2560x1440+2560+0 (normal left inverted right x axis y axis) 597mm x 336mm
           2560x1440      60.0*+
        DP-1 disconnected (normal left inverted right x axis y axis)

    And an extract from my log file (perhaps this is relevant?):

        [ 7.862] (--) NVIDIA(0): Valid display device(s) on GeForce GTX 680 at PCI:2:0:0
        [ 7.862] (--) NVIDIA(0):     CRT-0
        [ 7.862] (--) NVIDIA(0):     ACB QHD270 (DFP-0) (boot, connected)
        [ 7.862] (--) NVIDIA(0):     DFP-1
        [ 7.862] (--) NVIDIA(0):     DFP-2
        [ 7.862] (--) NVIDIA(0):     DFP-3
        [ 7.862] (--) NVIDIA(0):     DFP-4
        [ 7.862] (--) NVIDIA(0): CRT-0: 400.0 MHz maximum pixel clock
        [ 7.862] (--) NVIDIA(0): ACB QHD270 (DFP-0): 330.0 MHz maximum pixel clock
        [ 7.862] (--) NVIDIA(0): ACB QHD270 (DFP-0): Internal Dual Link TMDS
        [ 7.862] (--) NVIDIA(0): DFP-1: 165.0 MHz maximum pixel clock
        [ 7.862] (--) NVIDIA(0): DFP-1: Internal Single Link TMDS
        [ 7.862] (--) NVIDIA(0): DFP-2: 165.0 MHz maximum pixel clock
        [ 7.862] (--) NVIDIA(0): DFP-2: Internal Single Link TMDS
        [ 7.862] (--) NVIDIA(0): DFP-3: 330.0 MHz maximum pixel clock
        [ 7.862] (--) NVIDIA(0): DFP-3: Internal Single Link TMDS
        [ 7.862] (--) NVIDIA(0): DFP-4: 960.0 MHz maximum pixel clock
        [ 7.862] (--) NVIDIA(0): DFP-4: Internal DisplayPort

  • VirtualBox 4.2.14 is now available

    - by user12611829
    The VirtualBox development team has just released version 4.2.14, and it is now available for download. This is a maintenance release for version 4.2 and contains quite a few fixes. Here is the list from the official Changelog.

    - VMM: another TLB invalidation fix for non-present pages
    - VMM: fixed a performance regression (4.2.8 regression; bug #11674)
    - GUI: fixed a crash on shutdown
    - GUI: prevent stuck keys under certain conditions on Windows hosts (bugs #2613, #6171)
    - VRDP: fixed a rare crash on the guest screen resize
    - VRDP: allow to change VRDP parameters (including enabling/disabling the server) if the VM is paused
    - USB: fixed passing through devices on Mac OS X host to a VM with 2 or more virtual CPUs (bug #7462)
    - USB: fixed hang during isochronous transfer with certain devices (4.1 regression; Windows hosts only; bug #11839)
    - USB: properly handle orphaned URBs (bug #11207)
    - BIOS: fixed function for returning the PCI interrupt routing table (fixes NetWare 6.x guests)
    - BIOS: don't use the ENTER / LEAVE instructions in the BIOS as these don't work in the real mode as set up by certain guests (e.g. Plan 9 and QNX 4)
    - DMI: allow to configure DmiChassisType (bug #11832)
    - Storage: fixed lost writes if iSCSI is used with snapshots and asynchronous I/O (bug #11479)
    - Storage: fixed accessing certain VHDX images created by Windows 8 (bug #11502)
    - Storage: fixed hang when creating a snapshot using Parallels disk images (bug #9617)
    - 3D: seamless + 3D fixes (bug #11723)
    - 3D: version 4.2.12 was not able to read saved states of older versions under certain conditions (bug #11718)
    - Main/Properties: don't create a guest property for non-running VMs if the property does not exist and is about to be removed (bug #11765)
    - Main/Properties: don't forget to make new guest properties persistent after the VM was terminated (bug #11719)
    - Main/Display: don't lose seamless regions during screen resize
    - Main/OVF: don't crash during import if the client forgot to call Appliance::interpret() (bug #10845)
    - Main/OVF: don't create invalid appliances by stripping the file name if the VM name is very long (bug #11814)
    - Main/OVF: don't fail if the appliance contains multiple file references (bug #10689)
    - Main/Metrics: fixed Solaris file descriptor leak
    - Settings: limit depth of snapshot tree to 250 levels, as more will lead to decreased performance and may trigger crashes
    - VBoxManage: fixed setting the parent UUID on diff images using sethdparentuuid
    - Linux hosts: work around for not crashing as a result of automatic NUMA balancing which was introduced in Linux 3.8 (bug #11610)
    - Windows installer: force the installation of the public certificate in background (i.e. completely prevent user interaction) if the --silent command line option is specified
    - Windows Additions: fixed problems with partial install in the unattended case
    - Windows Additions: fixed display glitch with the Start button in seamless mode for some themes
    - Windows Additions: Seamless mode and auto-resize fixes
    - Windows Additions: fixed trying to retrieve new auto-logon credentials if current ones were not processed yet
    - Windows Additions installer: added the /with_wddm switch to select the experimental WDDM driver by default
    - Linux Additions: fixed setting own timed out and aborted texts in information label of the lightdm greeter
    - Linux Additions: fixed compilation against Linux 3.2.0 Ubuntu kernels (4.2.12 regression as a side effect of the Debian kernel build fix; bug #11709)
    - X11 Additions: reduced the CPU load of VBoxClient in drag'and'drop mode
    - OS/2 Additions: made the mouse wheel work (bug #6793)
    - Guest Additions: fixed problems copying and pasting between two guests on an X11 host (bug #11792)

    The full changelog can be found here. You can download binaries for Solaris, Linux, Windows and MacOS hosts at http://www.virtualbox.org/wiki/Downloads

    Technocrati Tags: Oracle Virtualization VirtualBox

  • Ubuntu 12.04 won't load - hangs at Busybox v1.18.5 / initramfs

    - by Marty
    I want to start by saying that I am very new to Linux (about 1 month using it). I have had no problems up until now. I am running Ubuntu 12.04 on a Toshiba laptop with a 250 GB hard drive and 3 GB of RAM. Everything worked fine yesterday. The only changes I made were that I downloaded Banshee to try as a replacement for Rhythmbox and did a few recommended updates. This morning I tried to boot, it took a long time, and I finally got this error:

        mount: mounting /dev/disk/by-uuid/02bc41cc-1e21-4700-a179-be2805a658c4 on /root failed: Invalid argument
        mount: mounting /dev on /root/dev failed: No such file or directory
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /proc on /root/proc failed: No such file or directory
        Target filesystem doesn't have requested /sbin/init.
        No init found. Try passing init= bootarg.

        BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands

        (initramfs)

    I'm not sure what to do beyond this point. I have read around on here and haven't found the help I need. I did try to boot it from the Live CD. I can boot up to the Try Ubuntu/Install Ubuntu screen. When I go through the Try Ubuntu selection I can't access my hard disk. When I clicked on it I got this error:

        Unable to mount 247 GB Filesystem
        Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sda/1,
        missing codepage or helper program, or other error.
        In some cases useful info is found in syslog - try dmesg | tail or so.

    I tried dmesg | tail and saw a string of values but nothing that looked helpful. I have also tried to boot from the GRUB screen in recovery mode and as the previous Linux version, but they didn't work either. I tried to load Windows Recovery Environment (loader) (on /dev/sdc3) and got this message:

        error: no such device: 268057B1805785E9
        error: hd1 cannot get C/H/S values

    I had seen somewhere that I could fix this with the Live CD but my knowledge isn't good enough to try. I tried something with Gpart that I had read about, but the system told me that I didn't have Gpart. Could someone please explain to me what I need to do and/or haven't tried yet. Thanks so much!

  • SharePoint 2010 Hosting :: Sending SMS Alerts in SharePoint 2010 Over Office Mobile Service Protocol (OMS)

    - by mbridge
    In this post, I want to share the exciting news about one of SharePoint 2010's new features: finally it's possible to send SMS directly from SharePoint to mobile phones. The advantages of sending SMS instead of email messages are obvious: SMS alerts or reminders received on mobile phones are preferable to email messages, which can be lost in the mass of spam.

    The interface is standard, as it's very similar to previous versions of the product. Adjustments are easy to make: simply enter the address of the Office Mobile Service (OMS) web service which you want to use for sending messages, then specify the connection parameters. Further details on Office Mobile Service are available below. The Test Service button checks if the OMS web service is accessible using the provided URL (user name and password are not verified). This check is needed because the OMS web service URL depends on the mobile operator and country. It's now possible to select the method of sending alerts in the alert settings; the Email option is selected by default. The alert delivery method is displayed in the list of existing alerts.

    Office Mobile Service (OMS)

    SharePoint 2010 uses external servers, similar to SMTP servers, for sending SMS alerts. However, Microsoft started development and promotion of their own protocol instead of using existing ones. That is how Office Mobile Service (OMS) appeared. This open protocol enables clients to send text and multimedia messages (mobile messages) remotely to the server, which processes these messages and delivers them to mobile phones. A typical scenario for this protocol is data transfer between a computer application and a mobile phone. The recipient can answer messages, and the server in return will deliver the answer by the SMTP protocol, i.e. by email. A key quality of this protocol is that it's built on top of the HTTP(S) and SOAP protocols, which means that in effect the SMS gateway must support a typified web service. What do you get from the web service? The ability to send SMS from any platform you want. The protocol is still being developed; version 0.2 from 08/28/2009 was available when this article was published.

    To promote their protocol and simplify the search for a server, Microsoft provides the page http://messaging.office.microsoft.com/HostingProviders.aspx, which lists the providers that support the OMS protocol and message delivery to your operator. All you need to do is decide which provider to use, complete the agreement, then adjust the SharePoint connection parameters and start working. Some providers advertise themselves not only to clients but to mobile operators as well, offering automatic addition to the list of Office Mobile Service Providers.

    To view the full specification of OMS, please go to http://msdn.microsoft.com/en-us/library/dd774103.aspx.

  • Unit testing in Django

    - by acjohnson55
    I'm really struggling to write effective unit tests for a large Django project. I have reasonably good test coverage, but I've come to realize that the tests I've been writing are definitely integration/acceptance tests, not unit tests at all, and I have critical portions of my application that are not being tested effectively. I want to fix this ASAP.

    Here's my problem. My schema is deeply relational and heavily time-oriented, giving my model objects high internal coupling and lots of state. Many of my model methods query based on time intervals, and I've got a lot of auto_now_add going on in timestamped fields. So take a method that looks like this, for example:

        def summary(self, startTime=None, endTime=None):
            # ... logic to assign a proper start and end time
            # if none was provided, probably using datetime.now()
            objects = self.related_model_set.manager_method.filter(...)
            return sum(object.key_method(startTime, endTime) for object in objects)

    How does one approach testing something like this? Here's where I am so far. It occurs to me that the unit testing objective should be: given some mocked behavior by key_method on its arguments, is summary correctly filtering/aggregating to produce a correct result? Mocking datetime.now() is straightforward enough, but how can I mock out the rest of the behavior?

    I could use fixtures, but I've heard pros and cons of using fixtures for building my data (poor maintainability being a con that hits home for me). I could also set up my data through the ORM, but that can be limiting, because then I have to create related objects as well, and the ORM doesn't let you mess with auto_now_add fields manually. Mocking the ORM is another option, but not only is it tricky to mock deeply nested ORM methods, the logic in the ORM code gets mocked out of the test, and mocking seems to make the test really dependent on the internals and dependencies of the function under test.

    The toughest nuts to crack seem to be functions like this, which sit on a few layers of models and lower-level functions and are very dependent on the time, even though these functions may not be super complicated. My overall problem is that no matter how I slice it, my tests look way more complex than the functions they are testing.
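
    For what it's worth, here is a minimal, self-contained sketch of the mocking direction discussed above, using the standard library's unittest.mock. Report, the manager and key_method are hypothetical stand-ins that mirror the shape of the method in the question, not its real models; with an actual Django model you would typically patch the related manager (and datetime.now) where the module under test looks them up, rather than injecting them:

        from unittest import TestCase, main
        from unittest.mock import MagicMock

        class Report:
            """Stand-in that mirrors the shape of the summary() method above."""
            def __init__(self, related_manager):
                self.related_model_set = related_manager

            def summary(self, startTime=None, endTime=None):
                objects = self.related_model_set.filter(active=True)
                return sum(obj.key_method(startTime, endTime) for obj in objects)

        class SummaryTest(TestCase):
            def test_summary_sums_key_method_over_filtered_objects(self):
                # Each related object returns a canned value from key_method.
                first, second = MagicMock(), MagicMock()
                first.key_method.return_value = 2
                second.key_method.return_value = 3

                # The manager's filter() hands back our fake related objects.
                manager = MagicMock()
                manager.filter.return_value = [first, second]

                report = Report(manager)
                self.assertEqual(report.summary("2012-01-01", "2012-01-31"), 5)
                first.key_method.assert_called_once_with("2012-01-01", "2012-01-31")

        if __name__ == "__main__":
            main()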

  • Networking works on Live CD but not after installation.

    - by user9054
    Hi Friends, I have got this issue with Ubuntu 10.10. I have been with ubuntu 8.04 and then decided to try out ubuntu 10.10 . I booted with a LiveCD and was able to configure the wireless network painlessly using the livecd, so happily I installed ubuntu 10.10. As soon as ubuntu came up it detected the wireless network and i was able to assign a static IP to eth1 (i dont use DHCP option on my ADSL router) and enter a wap key and use pppoeconf to configure the dialer. The net was on and i was able to surf the net. All hunky dory so far. However on the next boot the fun started . It did not detect the wireless network. I could not see the network manager icon in the systray. I used ifconfig and saw that the entry for eth1 was missing. I used ifup eth1 and it said that eth1 was already up. Then i installed wifi-radar. Wifi-Radar detected the wireless network. I configured wifi-radar for the detected wireless network , set the wap driver as wext and used the manual IP settings. However on clicking connect wifi-radar started looking for a DHCP IP. I cannot understand why wifi-radar is using DHCP when I have specified manual settings. Then I decided to use the wired network to surf the net looking for a solution. So I plugged in the network cable from my modem , configured eth0 , used pppoeconf and connected the net. Then I foolishly decided to reboot my PC. And wonders of wonders , the same problem appeared. I cannot see eth0 in my ifconfig anymore. I used pon and it said something about network error or something. Now my ifconfig shows only lo .Can anybody help me on this. PS : 1) I tried linux mint 10 and had the same issue . on rebooting wireless network was not getting detected. 2) I have made myself the administrator so that there is no issue of rights or anything.

  • IRM and Consumerization

    - by martin.abrahams
    As the season of rampant consumerism draws to its official close on 12th Night, it seems a fitting time to discuss consumerization - whereby technologies from the consumer market, such as the Android and iPad, are adopted by business organizations. I expect many of you will have received a shiny new mobile gadget for Christmas - and will be expecting to use it for work as well as leisure in 2011. In my case, I'm just getting to grips with my first Android phone. This trend developed so much during 2010 that a number of my customers have officially changed their stance on consumer devices - accepting consumerization as something to embrace rather than resist. Clearly, consumerization has significant implications for information control, as corporate data is distributed to consumer devices whether the organization is aware of it or not. I daresay that some DLP solutions can limit distribution to some extent, but this creates a conflict between accepting consumerization and frustrating it. So what does Oracle IRM have to offer the consumerized enterprise? First and foremost, consumerization does not automatically represent great additional risk - if an enterprise seals its sensitive information. Sealed files are encrypted, and that fundamental protection is not affected by copying files to consumer devices. A device might be lost or stolen, and the user might not think to report the loss of a personally owned device, but the data and the enterprise that owns it are protected. Indeed, the consumerization trend is another strong reason for enterprises to deploy IRM - to protect against this expansion of channels by which data might be accidentally exposed. It also enables encryption requirements to be met even though the enterprise does not own the device and cannot enforce device encryption. Moving on to the usage of sealed content on such devices, some of our customers are using virtual desktop solutions such that, in truth, the sealed content is being opened and used on a PC in the normal way, and the user is simply using their device for display purposes. This has several advantages: The sensitive documents are not actually on the devices, so device loss and theft are even less of a worry The enterprise has another layer of control over how and where content is used, as access to the virtual solution involves another layer of authentication and authorization - defence in depth It is a generic solution that means the enterprise does not need to actively support the ever expanding variety of consumer devices - the enterprise just manages some virtual access to traditional systems using something like Citrix or Remote Desktop services. It is a tried and tested way of accessing sealed documents. People have being using Oracle IRM in conjunction with Citrix and Remote Desktop for several years. For some scenarios, we also have the "IRM wrapper" option that provides a simple app for sealing and unsealing content on a range of operating systems. We are busy working on other ways to support the explosion of consumer devices, but this blog is not a proper forum for talking about them at this time. If you are an Oracle IRM customer, we will be pleased to discuss our plans and your requirements with you directly on request. You can be sure that the blog will cover the new capabilities as soon as possible.

  • Paypal PDT and IPN , how does it work?

    - by slow diver
    PDT

    Payment Data Transfer means getting the transaction data of the purchase that was made on the PayPal site; you want to fetch that on your own site and display it to the user, and you may also want to store it in your database for archiving and tracking purposes. But I cannot exactly follow the documentation here. What I am not getting is:

        Once you have activated PDT, every time a buyer makes a website payment and is redirected to your return URL, a transaction token will be passed along as a "GET" variable to this return URL. In order to properly use PDT and display transaction details to your customer, you should fetch the transaction token, variable name "tx", and retrieve transaction details from PayPal by constructing an HTTP POST to PayPal. Your POST should be sent to https://www.paypal.com/cgi-bin/webscr. You must post the transaction token using the variable "tx" and the value of the transaction token previously received (e.g. "tx=transaction_token"), and the special identity token using the variable "at" and the value of your PDT identity token (e.g. "at=identity_token"). You will also need to append a variable named "cmd" with the value "_notify-synch", for example "cmd=_notify-synch", to the POST string.

    IPN

    I have set up Instant Payment Notification through the settings according to this documentation. This is basically logging into your PayPal account and enabling IPN while specifying a URL where the notification will be sent. This is used to complete an order so that the product can be shipped. What I did is set up a PHP page. I created a table, and whenever that page is called (or hit), it registers an entry in the table so I know a notification came from PayPal. But it does not work either. What am I really doing wrong?

    The first thing I want to troubleshoot, though, is that when the buyer pays the amount, he should be automatically redirected to my site. I have enabled this, but automatic redirection just does not work. Instead, the URL is shown to him as an option after the payment confirmation is shown. Can someone guide me through how the PDT process goes? Where do I make the request for PDT: is it along with the very first request (the Buy Now button), or is it sent later?

    Addition: I found some good sample code of how everything should work, but it still does not work. I use this code for IPN: http://officetrio.com/modules/free-php-paypal-ipn-script.php. I am using this for PDT (it uses SSL; I changed SSL to regular HTTP, copying the PayPal version, and it still does not work): http://ykyuen.wordpress.com/2010/02/17/paypal-payment-data-transfer-sample-code/
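
    To make the PDT call quoted above concrete, here is a minimal sketch that posts tx, at and cmd=_notify-synch back to PayPal and returns the raw response. It assumes the third-party requests library and placeholder token values; it is only an illustration of the documented flow, not production code:

        import requests  # third-party: pip install requests

        PAYPAL_URL = "https://www.paypal.com/cgi-bin/webscr"

        def fetch_pdt_details(tx_token, identity_token):
            """POST the transaction token back to PayPal and return the raw PDT response.

            tx_token arrives as the 'tx' GET parameter on your return URL;
            identity_token is the PDT identity token from your PayPal profile.
            """
            payload = {
                "cmd": "_notify-synch",
                "tx": tx_token,
                "at": identity_token,
            }
            resp = requests.post(PAYPAL_URL, data=payload, timeout=30)
            resp.raise_for_status()
            # The first line of the body is SUCCESS or FAIL, followed by key=value pairs.
            return resp.text

        # Example with placeholder tokens:
        # print(fetch_pdt_details("transaction_token", "identity_token"))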

  • NVidia with Optimus conflicting in Ubuntu 12.04

    - by Humannoise
    I have recently installed Ubuntu 12.04 on an Intel Ivy Bridge machine with integrated graphics and an NVidia GPU with Optimus technology; however, I can't manage to get it to work properly. I have already tried the solution from the Bumblebee project, but I get the following message when I try to run anything with the nvidia card (e.g. with optirun firefox):

        [ERROR]The Bumblebee daemon has not been started yet or the socket path /var/run/bumblebee.socket was incorrect.
        [ERROR]Could not connect to bumblebee daemon - is it running?

    Since the nvidia card is not working properly, some software like Scilab, which makes use of the X11 system for graphics handling and plotting, won't work either. My BIOS has no option concerning the graphics card, and the daemon log returned:

        Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ bumblebeed[980]: Module 'nvidia' is not found.
        Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ kernel: [ 17.943272] init: bumblebeed main process (980) terminated with status 1
        Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ kernel: [ 17.943288] init: bumblebeed main process ended, respawning
        Jul 5 16:10:51 humannoise-W251ESQ-W270ESQ bumblebeed[1026]: Module 'nvidia' is not found.

    lspci -nn | grep '\[030[02]\]:' returned:

        00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09)
        01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:0de9] (rev a1)

    For the command dpkg -l | grep '^ii' | grep nvidia I got:

        ii  bumblebee-nvidia          3.0-2~preciseppa1              nVidia Optimus support using the proprietary NVIDIA driver
        ii  nvidia-current            302.17-0ubuntu1~precise~xup1   NVIDIA binary Xorg driver, kernel module and VDPAU library
        ii  nvidia-current-updates    295.49-0ubuntu0.1              NVIDIA binary Xorg driver, kernel module and VDPAU library
        ii  nvidia-settings           302.17-0ubuntu1~precise~xup3   Tool of configuring the NVIDIA graphics driver
        ii  nvidia-settings-updates   295.33-0ubuntu1                Tool of configuring the NVIDIA graphics driver

    After a full reinstallation, including the removal of any previous nvidia driver, lsmod | grep -E 'nvidia|nouveau' returned:

        nvidia  10888310  46

    dmesg | grep -C3 -E 'nouveau|NVRM' returned things like:

        [ 1875.607283] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [ 1875.607289] nvidia 0000:01:00.0: setting latency timer to 64
        [ 1875.607293] vgaarb: device changed decodes: PCI:0000:01:00.0,olddecodes=io+mem,decodes=none:owns=none
        [ 1875.607363] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 302.17 Tue Jun 12 16:03:22 PDT 2012
        [ 1884.830035] nvidia 0000:01:00.0: PCI INT A disabled
        [ 1884.832058] bbswitch: disabling discrete graphics
        [ 1884.832960] bbswitch: Result of Optimus _DSM call: 09000019

    Some programs, like Scilab, are now working fine when launched under optirun (e.g. optirun scilab). Thank you.

  • The challenge of giving a positive No

    - by MarkPearl
    I find it ironic that the more I am involved in the software industry, the more apparent it becomes that soft skills are just as if not more important than the technical abilities of a developer. One of the biggest challenges I have faced in my career is in managing client expectations to what one can deliver and being able to work with multiple clients. If I look at where things commonly go pear shaped, one area features a lot is where I should have said "No" to a request, but because of the way the request was made I ended up saying yes. Time and time again this has caused immense pain. Thus, when I saw on Amazon that they had a book titled "The power of a positive no" by William Ury I had to buy it and read it. In William's book he explains an approach to saying No that while extremely simple does change the way a No is presented. In essence he talks of a pattern the Yes! > No > Yes? Pattern. 1. Yes! -> positively and concretely describing your core interests and values 2. No. -> explicitly link your no to this YES! 3. Yes? -> suggest another positive outcome or agreement to the other person Let me explain how I understood it. If you are working on a really important project and someone asks you to do add a quick feature to another project, your Yes! would be to the more important project, which would mean a No to the quick feature, and an option for your Yes? may be an alternative time when you can look at it.. An example of an appropriate response would be... It is really important that I keep to the commitment that I made to this customer to finish his project on time so I cannot work on your feature right now but I am available to help you in a weeks time. William then goes on to explain the type of behaviour a person may display when the no is received. He illustrates this with a diagram called the curve of acceptance. William points out that if you are aware of the type of behaviour you can expect it empowers you to stay true to your no. Personally I think reading and having an understanding of the “soft” side of things like saying no is invaluable to a developer.

  • When following SRP, how should I deal with validating and saving entities?

    - by Kristof Claes
    I've been reading Clean Code and various online articles about SOLID lately, and the more I read about it, the more I feel like I don't know anything. Let's say I'm building a web application using ASP.NET MVC 3, and I have a UsersController with a Create action like this:

        public class UsersController : Controller
        {
            public ActionResult Create(CreateUserViewModel viewModel)
            {
            }
        }

    In that action method I want to save a user to the database if the data that was entered is valid. Now, according to the Single Responsibility Principle, an object should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility. Since validation and saving to the database are two separate responsibilities, I guess I should create two separate classes to handle them, like this:

        public class UsersController : Controller
        {
            private ICreateUserValidator validator;
            private IUserService service;

            public UsersController(ICreateUserValidator validator, IUserService service)
            {
                this.validator = validator;
                this.service = service;
            }

            public ActionResult Create(CreateUserViewModel viewModel)
            {
                ValidationResult result = validator.IsValid(viewModel);
                if (result.IsValid)
                {
                    service.CreateUser(viewModel);
                    return RedirectToAction("Index");
                }
                else
                {
                    foreach (var errorMessage in result.ErrorMessages)
                    {
                        ModelState.AddModelError(String.Empty, errorMessage);
                    }
                    return View(viewModel);
                }
            }
        }

    That makes some sense to me, but I'm not at all sure that this is the right way to handle things like this. It is, for example, entirely possible to pass an invalid instance of CreateUserViewModel to the IUserService class. I know I could use the built-in DataAnnotations, but what about when they aren't enough? Imagine that my ICreateUserValidator checks the database to see if there already is another user with the same name...

    Another option is to let the IUserService take care of the validation, like this:

        public class UserService : IUserService
        {
            private ICreateUserValidator validator;

            public UserService(ICreateUserValidator validator)
            {
                this.validator = validator;
            }

            public ValidationResult CreateUser(CreateUserViewModel viewModel)
            {
                var result = validator.IsValid(viewModel);
                if (result.IsValid)
                {
                    // Save the user
                }
                return result;
            }
        }

    But I feel I'm violating the Single Responsibility Principle here. How should I deal with something like this?

  • 24HOP gets off to a good start

    - by Rob Farley
    Session 11 is on as I write this – Ami Levin presenting about Primary Keys. It’s a good session. But actually, they’ve all been excellent so far, not just Ami’s. I’ve heard only good things about the content. So if you’re reading this and 24HOP is still on, then tune in and take part. If it’s finished, get yourself over to http://sqlpass.org/24hours and see if the sessions have been made available on-demand. Yes – you should be able to watch the sessions when you want to for a year. Watching live is best, because you can ask questions and have them answered during the session, but if there are ones you just couldn’t make, then watching them on-demand is a good option. Numbers have been “not bad”. At the moment it’s still the middle of the night for most Americans – about 6:30am in New York, and yet we’ve had well over a hundred at all the sessions so far, getting up to well over 300 for some sessions. And when I look through the list of names, I see a bunch of names that suggest we’re reaching people from all around the world. I’m seriously looking forward to seeing the stats about which countries have been represented in the audiences. There have been a few comments about the platform. Everyone seems to consider IBTalk an improvement on LiveMeeting, but the closed captioning has met a mixed reception. Some people are loving it, whereas other people are finding the translations leave quite a bit of space for improvement. If you have feedback on this, please feel free to drop me an email (my name with an underscore at hotmail.com, or with a dot at sqlpass.org should reach me just fine, or Twitter, etc). I don’t know how many of the sessions I’ll get to watch overnight – but I’m looking forward to seeing how things go as the day progresses. Big thanks to everyone who’s involved – the sponsors, PASS HQ team and the IBTalk folk who have stayed up overnight to facilitate, plus the moderators, the people doing the live captioning, and of course the speakers and attendees. I love how the SQL Community gets behind things like this. Earlier, the Adelaide SQL Server User Group gathered and watched Denny Lee’s session on BigData, and everyone in the group agreed that it worked really well. I took a picture of our cinema room, although you could only see a small section of the audience. @rob_farley

  • Cannot boot into system after deleting partition

    - by Clayton
    Okay...so this was kind of a stupid thing for me to do now that I think back on it. I was experiencing a ton of lag and not as much memory that I could use after installing Ubuntu 12.04. So after remembering I had installed multiple server versions of Ubuntu 12.04 by mistake, I went into Disk Management and proceeded to delete each and every one. Everything went fine. Up until this week, I have not experienced any problems. But starting yesterday I began to get lag just as I had before, and nothing fixed the problem. I decided to remove the Ubuntu partition, since I was also experiencing a visual error when given the option to select one to boot(the screen doesn't come up at all, and I recieved a monitor resolution error instead, but could still access both Windows and Ubuntu via arrow keys). After deleting the Ubuntu partition, so that I could see if running just Windows would fix the problem, I proceeded with what I was doing, installing a few programs that were not tied to my prediciment in any way. Upon rebooting my desktop, however, I recieved the following error: error: unknown filesystem. grub rescue> Hoping I could boot into Ubuntu via a pendrive and possibly backup my important files and wipe the hard drive to start fresh, I installed Ubuntu 13.04, but even that does not boot. Instead, I get this message on a terminal screen: SYSLINUX 4.06. EDD 4.06-pre1 Copyright (C) 1994-2011 H. Peter Anvin et al ERROR: No configuration file found No DEFAULT or UI configuration directive found! boot: So more or less, my desktop is screwed. I need to be able to get to the files inside because of my job as an artist, as well as retrieve my documents for my stories stored on Windows. Once I can succeed in solving this once and for all, I know for a fact I will stick to Ubuntu only, and install what is required to be able to run any Windows applications I used to use or need to use. I would rather not reformat the hard drive, and if I need to, it is a last resort. And I doubt I can use a Windows Recovery Disk to get my files back, as my mom has thrown out a lot of the installation disks and paperwork I would need to even follow through with that. :\ Keep in mind that I am a novice/newbie when it comes to Linux, but am hoping ot become better at it as time goes by. I appreciate any help you guys can give me. This will probably be the last time I attempt to do anything that could risk the well-being of my PC. (I've also looked through various questions on the site and tested a lot of the solutions. None seem to have worked.)

  • How To Completely Disable Subtitles in VLC

    - by The Geek
    If you watch a lot of videos using VLC, you might have noticed that it enables subtitles by default if they are there, which can be pretty annoying at times. Here's the quick tip to disable them entirely. Of course, you can always turn them back on if you want on an individual video basis.

    Disable Subtitles

    Head into the VLC preferences, and then click the All button at the bottom of the screen. On the left-hand side, choose Video -> Subtitles/OSD, and then uncheck the boxes for "Autodetect subtitle files", Enable sub-pictures, and On Screen Display. That should do it, unless the subtitles are forced in the video for some reason.

    Note: Certain video formats like MKV can sometimes have subtitles enabled even though there isn't a separate subtitles file. This is why you need to remove "Enable sub-pictures" as well, which totally disables the on-screen text. You can choose to only uncheck the autodetecting of subtitles instead if you'd prefer. And of course, you can simply right-click on the video, head to Video -> Subtitles Track and then choose the subtitles if you still wanted them. Note: this only works if the "enable sub-pictures" option is still enabled.

    And thus ends the tale of disabling those fracking subtitles. Starbuck approves.

  • ArchBeat Link-o-Rama for 2012-08-28

    - by Bob Rhubart
    You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one "The better option [to IaaS] is to rationalize the deployment stack so that VMs are needed only for exceptional cases," says B. R. Clouse. "By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now, the building block will be database instances or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as Platform as a Service (PaaS)." 'Shadow IT' can be the cloud's best friend | David Linthicum "I do not advocate that IT give up control and allow business units to adopt any old technology they want," says Infoworld cloud computing blogger David Linthicum. "However, IT needs to face reality: For the past three decades or so, corporate IT has been slow on the uptake around the use of productive new technologies." Do you agree? 9 ways cloud will impact IT employment | ZDNet ZDNet blogger Joe McKendrick condenses information from a recent report on how cloud computing will impact IT jobs. Number one on the list: New categories of jobs arising from cloud computing, which include "private cloud developers and administrators, departmental liaisons, integration specialists, cloud architects, and compliance specialists." Yeah, that's right, cloud architects. For more on cloud architects, including what you need to up your game to thrive in the cloud, check out "The Role of the Cloud Architect" on the OTN ArchBeat Podcast. Decisions, Decisions: The art, science, and politics of technology selection "When the time comes for a solution architect to make the final decision about the technologies, standards, and other elements that are to be incorporated into a particular project, what factors weigh most heavily on that decision? It comes as no surprise that among the architects I contacted, business needs top the list." Managing Oracle Exalogic Elastic Cloud with Oracle Enterprise Manager Ops Center Anand Akela's byline is on this post, but "Dr. Jürgen Fleischer, Oracle Enterprise Manager Ops Center Engineering" appears at the end of the post, so it's anybody's guess as to who wrote this thing. But the content includes a complete listing of the Exalogic 2.0.1 Tea Break Snippets series written by a member of the Exalogic team who goes by the name "The Old Toxophilist." So maybe the best thing to do here is ignore the names and focus on the very useful conent. Boost your infrastructure with Coherence into the Cloud | Nino Guarnacci Nino Guarnacci describes a use case that involved managing a variety of data caches that process complex queries and parallel computational operations, in order to maintain the caches in a consistent state on different server instances. Thought for the Day "No one hates software more than software developers." — Jeff Atwood Source: SoftwareQuotes

  • How can I solve this SAT edge case?

    - by ssb
    I have an SAT implementation that basically works, and the fact that it works is what's giving me a few headaches. Basically there are some situations where using the SAT doesn't quite give me my intended result. One of these involves movement across multiple collision objects. Or to put it another way, if I have several collision boxes lined up next to each other such as to create something like a wall or a floor, movement along that surface while constantly applying force into that surface sometimes causes hangups, i.e. the player stops moving. This illustration shows what I mean: The 2 boxes on the bottom represent a floor, and the box on top/in the middle represents what my player is doing. There are several squares lined up as world obstacles to create some kind of wall, and if I move to the left across this surface while holding the down key then the issue arises. It only happens at the exact dividing point between two blocks. It only happens when moving to the left. At any rate I think I know why it happens, but I don't know how to solve it. Basically when I update my player movement I consider which directions are pressed, naturally, so if down is pressed I will add the speed to the Y component, and so on. But due to the way my SAT is implemented, when the penetration into the shape is the same from both sides it just goes with the smallest axis that it finds first, and it checks collisions against objects in the order that they were created because it goes through a foreach loop on the list of collidable objects. So this all adds up to the effect of if I'm moving to the left over a series of boxes while holding down, it will resolve me back to the right out of the first box and then up out of the box to the right of it, and this continues as long as the penetration is the same. The odd part is that this doesn't happen every time, which I am going to attribute to some oddity regarding multiplying velocity by the game time and causing some minor discrepancies between the lengths. Ultimately what this boils down to is that it will keep resolving me to the right and up, but this is technically expected behavior. All the solutions I can think of only address the symptoms of this problem and not the actual cause, such as not using many blocks to create walls or shapes, which is an option I'd like to keep open. I could also change which axis my algorithm defaults to, but that would just cause problems when going up/down along the walls. What can I do to fix this?
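
    To see the tie-break concretely, here is a minimal axis-aligned sketch (plain Python, my own illustration rather than the question's actual engine code). Overlap is measured on both axes and the smaller one wins, so when the two overlaps come out exactly equal the resolver silently prefers one axis, which is the arbitrary choice described above:

        def resolve_aabb(player, obstacle):
            """Return the minimum translation vector (dx, dy) pushing player out of obstacle.

            Boxes are dicts with x, y (top-left corner), w and h. Purely illustrative.
            """
            overlap_x = (min(player["x"] + player["w"], obstacle["x"] + obstacle["w"])
                         - max(player["x"], obstacle["x"]))
            overlap_y = (min(player["y"] + player["h"], obstacle["y"] + obstacle["h"])
                         - max(player["y"], obstacle["y"]))
            if overlap_x <= 0 or overlap_y <= 0:
                return (0.0, 0.0)  # no collision

            # Tie case: when overlap_x == overlap_y this '<=' silently prefers the x axis,
            # which is exactly the kind of arbitrary pick that causes hang-ups at box seams.
            if overlap_x <= overlap_y:
                dx = -overlap_x if player["x"] < obstacle["x"] else overlap_x
                return (dx, 0.0)
            dy = -overlap_y if player["y"] < obstacle["y"] else overlap_y
            return (0.0, dy)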

  • ApiChange Corporate Edition

    - by Alois Kraus
    In my initial announcement I could only cover a small subset of what ApiChange can do for you. Let's look at how ApiChange can help you fix bugs caused by wrong usage of an API in a fraction of the time it would normally take.

    It happens that software is tested and some bugs show up. One bug could be: we get way too many log messages during our test run. Now you have the task of finding the most frequent messages and eliminating those Log calls from the source code. But what about the myriad other log calls? How can we check that the distribution of log calls is nearly equal across all developers? And if not, how can we contact the developer to check his code? ApiChange can help you connect these loose ends. It combines several information silos into one cohesive view. The picture below shows how it is able to fill the gaps.

    The public version does currently "only" parse the binaries and pdbs to give you the result columns for a -whousesmethod query. If it happens that you have Rational ClearCase (a source control system) in your development shop and an Active Directory in place, then ApiChange will try to determine, from the source file found via the pdb, the last check-in user, who should be present in your Active Directory. From there it is only a small hop to an LDAP query against your AD domain or the GC (Global Catalog) to get, from the user name, his

    - Full name
    - Email
    - Phone number
    - Department
    - ...

    ApiChange will append this additional data to all of your query results which contain source files if you add the -fileinfo option. As I said, this is currently not enabled by default since the AD domain needs to be configured, which currently means some hard-coded values in the SiteConstants.cs source file of ApiChange.Api.dll. Once you have this data you can generate metrics based on source file, developer, assembly, ... and add additional data by drag and drop directly into the pivot tables inside Excel. This allows you, for example, to generate a report which lists the source files with the most log calls in descending order, along with the developer name and email, in the pivot table. Armed with this knowledge you can take meaningful measures, e.g. ask the developer if the huge number of log calls in this source file can be optimized. I am aware that this is a very specific scenario, but it is a huge time saver when you are able to fill the missing gaps of information. ApiChange does this in an extensible way.

        namespace ApiChange.ExternalData
        {
            public interface IFileInformationProvider
            {
                UserInfo GetInformationFromFile(string fileName);
            }
        }

    It defines an interface where you can implement your custom information provider to close the gap between the source control system and the real person I have to email to ask whether his code needs a closer inspection.

  • Microsoft Virtual Academy

    Carpe Diem It's been since a while that I could write an article for this blog but alas, I was (and still am) very busy with customer's work. Which is actually good. So, what is this article going to tell you? Well, in general, just what I already tweeted, that life is constant process of learning - especially as software craftsman. Due to an upcoming new customer project in ASP.NET I had to seize the opportunity to get my head deeper into latest available technologies, like Windows Azure and SQL Azure. I know... cloud computing and so on is not a recent development and already available since quite a while but I never any means to get myself into this since roughly two weeks ago. Microsoft Virtual Academy I can't remember exactly what guided me towards the Microsoft Virtual Academy (MVA), oh wait... Yes, it was a posting on Facebook from an old CLIP community friend. He posted a shortened URL with #MVA tag that caught my attention. Thanks for that Thomas Kuberek. After the usual sign in or registration via Live ID I was a little bit surprised that Mauritius is not an available country option... Quick mail exchange with the MVA Decan, and yeah, apologies for the missing entry. So, currently I'm learning about Microsoft products and services, and collecting points under "Not Listed Country" until Mauritius is going to be added. Hopefully soon, as MVA honors your effort with different knowledge ranks that are compared to other students with public profiles. I think it's a nice move to add some game and competition factor into the learning game. The tracks and their different modules are mainly references to publicly available material online, namely on either MSDN, TechNet, Channel9, or other Microsoft based sites. The course material therefore also varies in different media and formats, ranging from simple online articles over downloadable documents (.docx or .pdf) to Silverlight / Windows Media streams with download options. Self-assessment and students ranking Each module in a track can be finished by taking part in a self-assessment. Up to now, the assessment I did (and passed) were limited to 10 minutes available time, and consisted of six to seven questions on the module training material. Nothing too serious but it gives you a glimpse idea how Microsoft certification exams are structured. Conclusion Nothing really new but nicely gathered, assembled and presented to the MVA students. At the moment, I wouldn't dare to compare the richness and quality of those courses with professional training offers, like Pluralsight .NET Training, LearnDevNow, VTC, etc. at all, but I think that MVA has potential. Give it a try, and let me know about your opinions.

  • Backup Azure Tables with the Enzo Backup API

    - by Herve Roggero
    In case you missed it, you can now backup (and restore) Azure Tables and SQL Databases using an API directly. The features available through the API can be found here: http://www.bluesyntax.net/backup20api.aspx and the online help for the API is here: http://www.bluesyntax.net/EnzoCloudBackup20/APIIntro.aspx.

    Backing up Azure Tables can't be any easier than with the Enzo Backup API. Here is a sample code that does the trick:

        // Create the backup helper class. The constructor automatically sets the SourceStorageAccount property
        StorageBackupHelper backup = new StorageBackupHelper(
            "storageaccountname", "storageaccountkey",
            "sourceStorageaccountname", "sourceStorageaccountkey",
            true, "apilicensekey");

        // Now set some properties...
        backup.UseCloudAgent = false;                       // backup locally
        backup.DeviceURI = @"c:\TMP\azuretablebackup.bkp";  // to this file
        backup.Override = true;
        backup.Location = DeviceLocation.LocalFile;

        // Set optional performance options
        backup.PKTableStrategy.Mode = BSC.Backup.API.TableStrategyMode.GUID; // Set GUID strategy by default
        backup.MaxRESTPerSec = 200; // Attempt to stay below 200 REST calls per second

        // Start the backup now...
        string taskId = backup.Backup();

        // Use the Environment class to get the final status of the operation
        EnvironmentHelper env = new EnvironmentHelper("storageaccountname", "storageaccountkey", "apilicensekey");
        string status = env.GetOperationStatus(taskId);

    As you can see above, the code is straightforward. You provide connection settings in the constructor, set a few options indicating where the backup device will be located, set optional performance parameters and start the backup. The performance options are designed to help you back up your Azure Tables quickly, while attempting to keep under a specific threshold to prevent Storage Account throttling. For example, the MaxRESTPerSec property will attempt to keep the overall backup operation under 200 REST calls per second. Another performance option is the backup strategy for Azure Tables. By default, all tables are simply scanned. While this works best for smaller Azure Tables, larger tables can use the GUID strategy, which will issue requests against an Azure Table in parallel, assuming the PartitionKey stores GUID values. It doesn't mean that your PartitionKey must contain GUIDs for this strategy to work, but the backup algorithm is tuned for this condition. Other options are available as well, such as filtering which columns, entities or tables are being backed up. Check out more on the Blue Syntax website at http://www.bluesyntax.net.

    Read the article

  • Is there any good hosting for asp.net and MySQL

    - by HAJJAJ
    Hi everyone. I have an account with one of the hosting companies; I built my project in ASP.NET and used MySQL for the database. The hosting company is not giving me the privileges to create a new user or a new stored procedure. This is what they told me: Due to the shared nature of our environment we had to make some modifications to your procedure (namely the definer). We also had to review your procedure to determine if it would be compatible with our environment. While your procedures will work (via phpMyAdmin or some other interface), it is unlikely they will be accessible via the Connector/.NET (ADO.NET) that your application is likely using. This is due to a security restriction with how that connector works in shared environments. http://dev.mysql.com/doc/refman/5.0/en/connector-net-programming-stored.html "Note When you call a stored procedure, the command object makes an additional SELECT call to determine the parameters of the stored procedure. You must ensure that the user calling the procedure has the SELECT privilege on the mysql.proc table to enable them to verify the parameters. Failure to do this will result in an error when calling the procedure." Unfortunately, giving read privileges on the mysql.proc table will give you access to the data of our other customers and that is not an acceptable risk. If your application can only work using stored procedures, then MSSQL will probably be the better option for your site. I apologize for the inconvenience and the wait to have this ticket completed. So is there any good hosting that anybody has already used to publish an ASP.NET and MySQL project? This is one of my stored procedures; I think it is simple and will not harm any other users:

    -- --------------------------------------------------------------------------------
    -- Routine DDL
    -- Note: comments before and after the routine body will not be stored by the server
    -- --------------------------------------------------------------------------------
    DELIMITER $$
    CREATE DEFINER=`root`@`localhost` PROCEDURE `SpcategoriesRead`(
        IN PaRactioncode VARCHAR(5),
        IN PaRCatID BIGINT,
        IN PaRSearchText TEXT
    )
    BEGIN
        -- Create a temporary table to hold the data selected by the action code
        DROP TEMPORARY TABLE IF EXISTS tmp;
        CREATE TEMPORARY TABLE tmp (
            CatID BIGINT PRIMARY KEY NOT NULL,
            CatTitle TEXT,
            CatDescription TEXT,
            CatTitleAr TEXT,
            CatDescriptionAr TEXT,
            PictureID BIGINT,
            Published BOOLEAN,
            DisplayOrder BIGINT,
            CreatedOn DATE
        );

        IF PaRactioncode = 1 THEN
            -- Retrieve all rows
            INSERT INTO tmp
            SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
            FROM tbcategories;
        ELSEIF PaRactioncode = 2 THEN
            -- Retrieve by ID
            INSERT INTO tmp
            SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
            FROM tbcategories
            WHERE CatID = PaRCatID;
        ELSEIF PaRactioncode = 3 THEN
            -- Not set yet
            INSERT INTO tmp
            SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
            FROM tbcategories
            WHERE Published = 1
            ORDER BY DisplayOrder;
        END IF;

        IF PaRSearchText IS NOT NULL THEN
            SET PaRSearchText = CONCAT('%', PaRSearchText, '%');
            SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
            FROM tmp
            WHERE CONCAT(CatTitle, CatDescription, CatTitleAr, CatDescriptionAr) LIKE PaRSearchText;
        ELSE
            SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr, PictureID, Published, DisplayOrder, CreatedOn
            FROM tmp;
        END IF;

        DROP TEMPORARY TABLE IF EXISTS tmp;
    END$$
    DELIMITER ;
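    For what it's worth, before switching databases it may be worth testing whether the parameter lookup can be avoided at all. The sketch below is a hedged illustration of calling SpcategoriesRead from Connector/NET with explicitly supplied parameters; the "Check Parameters=false" connection-string option is meant to tell the connector not to query mysql.proc for parameter metadata, but verify that your connector version and host actually support it, and note that the server, database and credential values here are placeholders.

    using System;
    using System.Data;
    using MySql.Data.MySqlClient;

    class CategoriesReader
    {
        static void Main()
        {
            // "Check Parameters=false" asks Connector/NET to skip the mysql.proc lookup;
            // parameters then have to be supplied explicitly and with the right names.
            string connStr = "Server=myhost;Database=mydb;Uid=myuser;Pwd=mypassword;Check Parameters=false;";

            using (MySqlConnection conn = new MySqlConnection(connStr))
            using (MySqlCommand cmd = new MySqlCommand("SpcategoriesRead", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("PaRactioncode", "1");        // 1 = retrieve all categories
                cmd.Parameters.AddWithValue("PaRCatID", 0);
                cmd.Parameters.AddWithValue("PaRSearchText", DBNull.Value);

                conn.Open();
                using (MySqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(reader["CatTitle"]);
                    }
                }
            }
        }
    }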

    Read the article

  • LightSwitch Tutorial - Adding Image to a LightSwitch Screen

    - by ChrisD
    Last week I discussed how to control screen layouts in LightSwitch. Now I will talk about how to add an image to a LightSwitch screen. In this demo I will upload an image onto the screen and save it to the database. The first step is to start VS 2010 and create a LightSwitch desktop application named “AddingImageIntoScreenInLSBeta2”, as shown in the following figure. In the second step, create a table as shown on the screen by selecting the "create a table" option on the start-up screen. Then we need to add a New Data Screen to our demo application. See the following figure, which shows the default screen layout for the screen we have created. We have to change the layout of this screen so that uploading and using the image on the screen can be explained easily. Before adding the model window we have to prepare the layout, so delete the highlighted fields as shown in the figure above. After preparing the layout, add a new group to the Person Property Rows Layout. To add a new group: [No: 1] – Select the Rows layout; it will show you the Add button. [No: 2] – Click the Add button to select the new group. [No: 3] – Select the New Group. After adding the new group, change the layout type to Columns Layout. Here, change the rows layout to a columns layout and set the display name to “Uploading Image Example”, then click the Add button to add the Photo field under the columns layout. Next, add a new group under the Columns Layout group; follow the [No: #] markers to create a new group under the columns layout group. After adding the new rows layout group, add the fields to the newly created group. [No: 1] – Select the Rows Layout group and change the display name to “Details”. [No: 2] – Click the Add button to select the appropriate fields to add to the group. [No: 3] – Add the fields to the group. The figure above shows the complete layout tree for our screen. Now the screen for uploading the image is ready. Just press the Play button and see the result.

    Read the article

  • Alternatives for comparing data from different databases

    - by Alex
    I have two huge tables in separate databases. One of them has the information about all the SMS that passed through the company's servers, while the other one has the information about the actual billing of those SMS. My job is to compare samples of both of these tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but not charged to the user, for whatever reason. The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. An issue here is that the dates are usually the same on both sides, but in many cases they differ by 1 or 2 seconds. I have, so far, two alternatives to do this: (PL/SQL) Create two tables where I'm going to temporarily store all the records of that 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both temporary tables and start comparing them one by one using cursors. In this case, the procedure would be run on the server where one of the sources resides, so the contents of the other one would be looked up over a dblink. (sqlplus + c++) Instead of storing the 1-hour samples in new tables, output the queries to text files. I will have two text files, one for each source. Then, open the first file and load all of its content into a hash_map (key-value) using C++, where the key will be the phone number and the value a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look for numberX's entry in the hash_map (which will be a list of times) and then check whether timeX is in that list. If it isn't, save it somewhere to finally store it in an "uncharged" table (this would also be the final step in case 1). My main concern is efficiency. These samples have about 2 million records in each source, so just grabbing one record on one side and looking it up on the other would not be possible. That's the reason I wanted to use hash_maps. Which do you think is a better option?
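    To make the second alternative a bit more concrete, here is a minimal sketch of the hash-map comparison. The asker mentions C++, but the same idea is shown here in C# (a Dictionary instead of hash_map) to stay consistent with the other samples on this page; the file names, the line format and the 2-second tolerance are assumptions for illustration only.

    using System;
    using System.Collections.Generic;
    using System.IO;

    class SmsBillingComparer
    {
        static void Main()
        {
            // Assumed line format in both exported files: "<phonenumber> <yyyy-MM-dd HH:mm:ss>"
            var billed = new Dictionary<string, List<DateTime>>();

            // Load the billing sample into the map: phone number -> list of billed times.
            foreach (string line in File.ReadLines("billed_sample.txt"))
            {
                string[] parts = line.Split(new[] { ' ' }, 2);
                if (!billed.TryGetValue(parts[0], out List<DateTime> times))
                {
                    times = new List<DateTime>();
                    billed[parts[0]] = times;
                }
                times.Add(DateTime.Parse(parts[1]));
            }

            // Scan the sent sample; timestamps may drift by a second or two between sources.
            TimeSpan tolerance = TimeSpan.FromSeconds(2);
            var uncharged = new List<string>();

            foreach (string line in File.ReadLines("sent_sample.txt"))
            {
                string[] parts = line.Split(new[] { ' ' }, 2);
                DateTime sentAt = DateTime.Parse(parts[1]);

                bool matched = billed.TryGetValue(parts[0], out List<DateTime> billedTimes)
                    && billedTimes.Exists(t => (t - sentAt).Duration() <= tolerance);

                if (!matched)
                    uncharged.Add(line); // candidate row for the "uncharged" table
            }

            Console.WriteLine("Sent but apparently not billed: " + uncharged.Count);
        }
    }

    With roughly two million records per side, the lookup for each sent record stays constant-time on the phone number, and only the short list of times for that number is scanned against the tolerance.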

    Read the article

  • State / Screen management in Entity Component Systems

    - by David Lively
    My entity/component system is happily humming along and, despite some performance concerns I initially had, everything is working fine. However, I've realized that I missed a crucial point when starting this thing: how do you handle different screens? At the moment, I have a GameManager class which owns a component manager and entity manager. When I create an entity, the entity manager assigns it an ID and makes sure it's tracked. When I modify the components that are assigned to an entity, an UpdateEntity method is called, which alerts each of the systems that they may need to add or remove the entity from their respective entity lists. A problem with this is that the collection of entities operated on by each system is determined solely by the individual systems, typically based on a "required component" filter. (An entity has to have a Renderable component to be rendered, for instance.) In this situation, I can't just keep collections of entities per screen and only Update/Draw those collections. They'd have to either be added and removed depending on their applicability to the current screen, which would cause their associated components to be removed, or be enabled/disabled in a group per screen to hide what's not supposed to be visible. These approaches seem like really, really crappy kludges. What's a good way to handle this? A pretty straightforward way that comes to mind is to create a separate GameManager (which in my implementation owns all of the systems, entities, etc.) per screen, which means that everything outside of the device context would be duplicated. That's bothersome because some things are always visible, or I might want to continue to display the game under a translucent menu window. Another option would be to add a "layer" key to the GameManager class, which could be checked against a displayable layer stack held by the game manager. *System.Draw() would be called for each active layer, in the required order as determined by the stack. When the systems request an iterator for their respective entity collections, it would be pre-filtered to a (cached) set of those entities that participate in the active layer. Those collections could be updated from the same UpdateEntity event that's already used to maintain each system's entity collections. Still, it kinda feels like a hack. If I've coded myself into a corner, feel free to throw tomatoes as long as they're labeled with a helpful suggestion. Hooray for learning curves.
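    For what it's worth, here is a rough, hedged sketch of that second option in C#, just to make it concrete: each entity carries a layer tag, the game manager holds an ordered layer stack, and each system's draw pass is filtered to the layer currently being drawn. All names here (Layer, EntitySystem, LayerStack, and so on) are invented for illustration and not taken from any particular engine.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Layer tag attached to each entity when it is created (hypothetical).
    enum Layer { World, Hud, PauseMenu }

    class Entity
    {
        public int Id;
        public Layer Layer;
    }

    abstract class EntitySystem
    {
        // The usual "required component" filtered list each system already maintains.
        protected readonly List<Entity> entities = new List<Entity>();

        public void Register(Entity e) { entities.Add(e); }

        // Called once per active layer; only entities tagged with that layer are drawn,
        // approximating the pre-filtered (cached) per-layer set described above.
        public void Draw(Layer layer)
        {
            foreach (Entity e in entities.Where(x => x.Layer == layer))
                DrawEntity(e);
        }

        protected abstract void DrawEntity(Entity e);
    }

    class RenderSystem : EntitySystem
    {
        protected override void DrawEntity(Entity e) =>
            Console.WriteLine($"Rendering entity {e.Id} on layer {e.Layer}");
    }

    class GameManager
    {
        // Displayable layer stack, drawn bottom-to-top: the world first,
        // then a translucent pause menu over it.
        public List<Layer> LayerStack { get; } = new List<Layer> { Layer.World, Layer.PauseMenu };
        public List<EntitySystem> Systems { get; } = new List<EntitySystem> { new RenderSystem() };

        public void Draw()
        {
            foreach (Layer layer in LayerStack)
                foreach (EntitySystem system in Systems)
                    system.Draw(layer);
        }
    }

    class Demo
    {
        static void Main()
        {
            var manager = new GameManager();
            manager.Systems[0].Register(new Entity { Id = 1, Layer = Layer.World });
            manager.Systems[0].Register(new Entity { Id = 2, Layer = Layer.PauseMenu });
            manager.Draw(); // entity 1 is drawn first, entity 2 on top of it
        }
    }

    Because the filtering happens where the systems already iterate their entity lists, the per-layer sets could be cached and refreshed from the same UpdateEntity event described above rather than recomputed on every frame.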

    Read the article

< Previous Page | 491 492 493 494 495 496 497 498 499 500 501 502  | Next Page >