Search Results

Search found 10242 results on 410 pages for 'stored proc'.


  • Designing a Database Application with OOP

    - by Tim C
    I often develop SQL database applications using Linq. My methodology is to build model classes to represent each table, and each table that needs inserting or updating gets a Save() method (which does either an InsertOnSubmit() or SubmitChanges(), depending on the state of the object). Often, when I need to represent a collection of records, I'll create a class that inherits from a List-like object of the atomic class, e.g.:

        public class CustomerCollection : CoreCollection<Customer> { }

    Recently, I was working on an application where end-users were experiencing slowness: each of the objects needed to be saved to the database if it met certain criteria. My Save() method was slow, presumably because I was making all kinds of round-trips to the server, calling DataContext.SubmitChanges() after each atomic save. So the code looked something like this:

        foreach(Customer c in customerCollection)
        {
            if(c.ShouldSave())
            {
                c.Save();
            }
        }

    I worked through multiple strategies to optimize, but ultimately settled on passing a big string of data to a SQL stored procedure, where the string holds all the data representing the records I was working with. It might look something like this:

        CustomerID:34567;CurrentAddress:23 3rd St;CustomerID:23456;CurrentAddress:123 4th St

    SQL Server parses the string, performs the logic to determine whether each record should be saved, and then inserts, updates, or ignores. With C#/Linq doing this work, I saved 5-10 records/s; when SQL does it, I get 100 records/s, so there is no denying the stored proc is more efficient. However, I hate the solution because it doesn't seem nearly as clean or safe. My real concern is that I don't have any better solutions that hold a candle to the performance of the stored proc solution. Am I doing something obviously wrong in how I'm thinking about designing database applications? Are there better ways of designing database applications?
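
    One alternative that keeps the set-based performance without the string parsing is a table-valued parameter (SQL Server 2008+). A minimal sketch, assuming a hypothetical table type dbo.CustomerTableType and a proc dbo.UpsertCustomers that MERGEs the rows (neither exists in the original post):

        using System.Data;
        using System.Data.SqlClient;

        // Build one DataTable for the whole batch...
        var batch = new DataTable();
        batch.Columns.Add("CustomerID", typeof(int));
        batch.Columns.Add("CurrentAddress", typeof(string));
        foreach (Customer c in customerCollection)
            if (c.ShouldSave())
                batch.Rows.Add(c.CustomerID, c.CurrentAddress);

        // ...and send it to the server in a single round-trip.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.UpsertCustomers", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            var p = cmd.Parameters.AddWithValue("@Customers", batch);
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.CustomerTableType";
            conn.Open();
            cmd.ExecuteNonQuery();
        }

    The proc can then do a single set-based MERGE against the Customer table, so SQL Server still does the heavy lifting, but the payload is typed instead of a hand-parsed delimited string.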

    Read the article

  • PHP MYSQL loop to check if LicenseID Values are contained in mysql DB [closed]

    - by Jasper
    I'm having some trouble finding the right loop to check whether some values are contained in a MySQL DB. I'm making a piece of software, and I want to add license IDs. Each user has X keys to use. When the user starts the client, it invokes a PHP page that checks whether the key sent in the POST method is stored in the DB or not. If that key isn't stored, I need to check the number of his keys: if he already has X, I ban him; otherwise I add the new key to the DB. I'm new to PHP and MySQL. I wrote this code and would like to know how I can improve it:

        <?php
        $user = $_POST['user'];           // original placeholder: "POST METHOD"
        $licenseID = $_POST['licenseID']; // original placeholder: "POST METHOD"

        $resultLic = mysql_query("SELECT id, idUser, idLicense FROM license WHERE idUser = '$user'")
            or die(mysql_error());
        $resultNumber = mysql_num_rows($resultLic);
        $keyFound = '0'; // if $keyFound is 1, the key is stored in the DB

        // This loop checks whether $licenseID is already stored in the DB
        while ($rows = mysql_fetch_array($resultLic, MYSQL_BOTH)) {
            if ($rows['idLicense'] === $licenseID) {
                echo("License Found"); // just for debugging
                $keyFound = '1';
                break;
            }
        }

        // If the key isn't in the DB and there are fewer than 3 keys, store the new key
        if ($keyFound == '0' && $resultNumber < 3) {
            // mysql_query(UPDATE users SET ... store $licenseID in the table)
        }
        // Otherwise the user is trying to use yet another generated key (from the client),
        // so he gets banned (the TOS state the software can't be used on more than 3 stations)
        elseif ($keyFound == '0') {
            // mysql_query(UPDATE users SET ban = '1' ...)
        }
        ?>

    I know this code seems really bad, so I would like to know how I can improve it. Could someone give me any advice? I chose to have 2 tables: users, where all the information about the users is (fields id, username, password), and another table license with fields id, idUsername, idLicense (the last one stores the licenses the software generates).
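
    For what it's worth, the per-row loop can be avoided entirely by letting MySQL do the matching. A sketch, using the same table and column names as the query above - and note that the mysql_* functions are deprecated in favor of mysqli/PDO with parameterized queries, which would also close the SQL-injection hole in $user and $licenseID:

        <?php
        // Is this exact key already registered for this user?
        $found = mysql_query("SELECT 1 FROM license
                              WHERE idUser = '$user' AND idLicense = '$licenseID'");
        if (mysql_num_rows($found) == 0) {
            // Not registered: count the user's existing keys.
            $count = mysql_query("SELECT COUNT(*) AS n FROM license WHERE idUser = '$user'");
            $row = mysql_fetch_assoc($count);
            if ($row['n'] < 3) {
                mysql_query("INSERT INTO license (idUser, idLicense)
                             VALUES ('$user', '$licenseID')");
            } else {
                mysql_query("UPDATE users SET ban = '1' WHERE id = '$user'");
            }
        }
        ?>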

    Read the article

  • Netbook performs hard shutdown without warning on low battery power

    - by Steve Kroon
    My Asus EEE netbook performs a hard shutdown when it reaches low battery power, without giving any warning - i.e. the power just goes off, without any shutdown process. I can't find anything in the syslog, and no error messages are printed before it happens. I've had this problem on previous (K)Ubuntu versions, and hoped updating to Ubuntu Precise would help resolve the issue, but it hasn't. The option in the Power application for "when power is critically low" is currently blank - the only options are a (grayed-out) hibernate and "Power off". I have re-installed indicator-power to no effect. The time remaining reported by acpi is unstable, as is the time remaining reported by gnome-power-statistics. (For example, running acpi twice in succession, I got 2h16min, and then 3h21min remaining. These sorts of jumps in the remaining time are also in the gnome-power-statistics graphs.) It might be possible to write a script to give me advance warning (as per @RanRag's comment below), but I would prefer to isolate why I don't get a critical battery notification from the system before this happens, so that I can take action as appropriate (suspend/shutdown/plug in power) when I get a notification. Some additional information on the battery:

        kroon@minia:~$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
          native-path:          /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/PNP0C0A:00/power_supply/BAT0
          vendor:               ASUS
          model:                1005P
          power supply:         yes
          updated:              Fri Aug 17 07:31:23 2012 (9 seconds ago)
          has history:          yes
          has statistics:       yes
          battery
            present:             yes
            rechargeable:        yes
            state:               charging
            energy:              33.966 Wh
            energy-empty:        0 Wh
            energy-full:         34.9272 Wh
            energy-full-design:  47.52 Wh
            energy-rate:         3.7692 W
            voltage:             12.61 V
            time to full:        15.3 minutes
            percentage:          97.248%
            capacity:            73.5%
            technology:          lithium-ion
          History (charge):
            1345181483  97.248  charging
            1345181453  97.155  charging
            1345181423  97.062  charging
            1345181393  96.970  charging
          History (rate):
            1345181483  3.769  charging
            1345181453  3.899  charging
            1345181423  4.061  charging
            1345181393  4.201  charging

        kroon@minia:~$ cat /proc/acpi/battery/BAT0/state
        present:                 yes
        capacity state:          ok
        charging state:          charging
        present rate:            332 mA
        remaining capacity:      3149 mAh
        present voltage:         12612 mV

        kroon@minia:~$ cat /proc/acpi/battery/BAT0/info
        present:                 yes
        design capacity:         4400 mAh
        last full capacity:      3209 mAh
        battery technology:      rechargeable
        design voltage:          10800 mV
        design capacity warning: 10 mAh
        design capacity low:     5 mAh
        cycle count:             0
        capacity granularity 1:  44 mAh
        capacity granularity 2:  44 mAh
        model number:            1005P
        serial number:
        battery type:            LION
        OEM info:                ASUS
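
    A rough sketch of the kind of stop-gap warning script mentioned above, assuming acpi's usual "Battery 0: Discharging, 12%, ..." output format (tune the threshold and interval to taste):

        #!/bin/sh
        # Rough battery watchdog: warn when discharging below a threshold.
        THRESHOLD=10
        while true; do
            info=$(acpi -b | head -1)
            pct=$(echo "$info" | grep -o '[0-9]\+%' | head -1 | tr -d '%')
            case "$info" in
                *Discharging*)
                    [ "$pct" -le "$THRESHOLD" ] && \
                        notify-send -u critical "Battery low" "${pct}% remaining - plug in or suspend"
                    ;;
            esac
            sleep 60
        done

    This does not explain the missing critical-battery notification, but it at least prevents the surprise power loss while the real cause is tracked down.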

    Read the article

  • Execute a SSIS package in Sync or Async mode from SQL Server 2012

    - by Davide Mauri
    Today I had to schedule a package stored in the shiny new SSIS Catalog that comes with SQL Server 2012 (http://msdn.microsoft.com/en-us/library/hh479588(v=SQL.110).aspx). Once your packages are stored there, they get executed using the new stored procedures created for this purpose. The script that gets executed when you run your packages right from Management Studio, or through a SQL Server Agent job, will be similar to the following:

        Declare @execution_id bigint
        EXEC [SSISDB].[catalog].[create_execution]
            @package_name='my_package.dtsx',
            @execution_id=@execution_id OUTPUT,
            @folder_name=N'BI',
            @project_name=N'DWH',
            @use32bitruntime=False,
            @reference_id=Null
        Select @execution_id

        DECLARE @var0 smallint = 1
        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type=50,
            @parameter_name=N'LOGGING_LEVEL',
            @parameter_value=@var0

        DECLARE @var1 bit = 0
        EXEC [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type=50,
            @parameter_name=N'DUMP_ON_ERROR',
            @parameter_value=@var1

        EXEC [SSISDB].[catalog].[start_execution] @execution_id
        GO

    The problem here is that the procedure simply starts the execution of the package and returns as soon as the package has been started, thus giving you the opportunity to execute packages asynchronously from your T-SQL code. This is just *great*, but what happens if I want to execute a package and WAIT for it to finish (thus having a synchronous execution of it)? You have to be sure to add the "SYNCHRONIZED" parameter to the package execution, before the start_execution procedure:

        exec [SSISDB].[catalog].[set_execution_parameter_value]
            @execution_id,
            @object_type=50,
            @parameter_name=N'SYNCHRONIZED',
            @parameter_value=1

    And that's it. PS: From RC0 onwards, the SYNCHRONIZED parameter is automatically added each time you schedule a package execution through the SQL Server Agent. If you're using an external scheduler, just keep this post in mind.

    Read the article

  • mdadm: breaks boot due to "is not ready yet or not present" error

    - by BarsMonster
    This is so damn frustrating :-| I've spent something like 20 hours on this nice error, and it seems dozens of people across the Internet have too, with no clear solution yet. I have a non-system RAID-5 of 5 disks, and it's fine. But during boot-up the system says "/dev/md0 is not ready yet or not present" and asks me to press 'S'. Very nice for Ubuntu Server - I have to bring over a monitor and keyboard to get past it. After this the system boots and it's all fine: the md0 device works, /proc/mdstat is fine, and when I do mount -a it mounts the array without errors and works fine. As a dumb and shameful workaround I added noauto in /etc/fstab and did the mounting in /etc/rc.local - it works fine then. Any hints on how to make it work properly?

    fstab:

        UUID=3588dfed-47ae-4c32-9855-2d69df713b86 /var/bigfatdisk ext4 noauto,noatime,data=writeback,barrier=0,nobh,commit=5 0 0

    mdadm config (autogenerated):

        # mdadm.conf
        #
        # Please refer to mdadm.conf(5) for information about this file.
        #

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR CENSORED

        # definitions of existing MD arrays
        ARRAY /dev/md/0 metadata=1.2 bitmap=/var/md0_intent UUID=efccbeb6:a0a65cd6:470dcdf3:62781188 name=LBox2:0

        # This file was auto-generated on Mon, 10 Jan 2011 04:06:55 +0200
        # by mkconf 3.1.2-2
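
    One commonly suggested middle ground between noauto and a proper fix (assuming Ubuntu's mountall, which honors this option) is to mark the mount as non-blocking, so boot continues while the array finishes assembling:

        UUID=3588dfed-47ae-4c32-9855-2d69df713b86 /var/bigfatdisk ext4 nobootwait,noatime,data=writeback,barrier=0,nobh,commit=5 0 0

    It is also worth regenerating the initramfs after any mdadm.conf change (sudo update-initramfs -u), since the "not ready yet" prompt usually means the array had not been assembled at the moment mountall first tried the fstab entry.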

    Read the article

  • Magento 1.6.2 Catalog Price Rule Problem

    - by robgt
    My Magento system seems to have a slight issue with Catalog Price Rule application. As far as the customer is concerned, all is working perfectly. The problem is that some orders are not being displayed properly in the admin system when I look at the details: the Catalog Price Rule appears not to be applied. So when we reconcile our card processor details with those in our backend Sage system, the numbers are not tallying up - Magento and our Sage system say the customer paid X, but the card issuer has taken payment of Y. The payment amount Y is the correct one, because the Catalog Price Rule applies at checkout: the customer is always paying the correct amount. But because of some issue with Magento, I think the data is possibly not being stored correctly (stored without the catalog price rule discount applied). This means that when I look at an order in the admin system, the line item prices that should be affected by the catalog price rule are not - and the prices in our backend Sage system are incorrect too. We use another piece of software to bring the data into Sage from Magento, so the data must be stored incorrectly somewhere in Magento's database, as this software reads the order information out of Magento. Does anyone have any idea what is wrong here, and how it might be fixed? Cheers!

    Read the article

  • Design of input files reading when it comes to defaults/transformations

    - by Stefano Borini
    Suppose you have an application that reads an input file, in a language that does not support the concept of None. The input is read, parsed, and the contents are stored in a structure for later use. Now, in general you want to take into account transformations of the data from the input, such as adding default values when something is not specified, or expanding a relative path given in the input to a full path. There are two different strategies to achieve this. The first strategy is to perform these transformations at input-file reading time. In practice, you put all the intelligence into the input parser, and your application has no logic to deal with unexpected circumstances, such as an unspecified value. You lose the information of what was specified and what wasn't, but you gain by black-boxing the details. Your "running code" needs that information in any case and in a proper form, and is not concerned whether it's a default or user-specified information. The second strategy is to make the file reader a real one-to-one mapper from the file to a memory-stored object, with no intelligent behavior. Unspecified values are not filled in (which may, however, be a problem in languages not supporting None) and data is stored verbatim from the file. The intelligence for recovery must now go into the "running code", which must check what was specified in the file, fall back to a default if needed, or modify the input appropriately before using it. I would like to know your opinion on these two approaches, and in particular which one you have found the most frequently implemented.
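
    To make the contrast concrete, here is a minimal sketch in Java (field names invented for illustration): the first reader normalizes at parse time, the second stores raw values plus specified/unspecified flags and leaves normalization to the caller.

        // Strategy 1: the parser applies defaults and transformations up front.
        class EagerConfig {
            final int timeoutSeconds; // always valid after parsing
            EagerConfig(String rawTimeout) {
                this.timeoutSeconds = rawTimeout.isEmpty() ? 30 : Integer.parseInt(rawTimeout);
            }
        }

        // Strategy 2: the parser is a verbatim mapper; callers must check the flag.
        class VerbatimConfig {
            final String rawTimeout;       // verbatim from the file, possibly empty
            final boolean timeoutSpecified;
            VerbatimConfig(String rawTimeout) {
                this.rawTimeout = rawTimeout;
                this.timeoutSpecified = !rawTimeout.isEmpty();
            }
        }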

    Read the article

  • How to fix boot and mount failed drops to initramfs prompt in Ubuntu 12.04?

    - by msPeachy
    My Ubuntu partition does not boot. This started after a power interruption during system boot. The next time I booted, I encountered the following error message:

        mount: mounting /dev/disk/by-uuid/3f7f5cd9d-6ea3-4da7-b5ec-**** on /root failed: Invalid argument
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /dev on /root/dev failed: No such file or directory
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /proc on /root/proc failed: No such file or directory
        Target file system doesn't have /sbin/init.
        No init found. Try passing init= bootarg.

        BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs) _

    I've searched for similar posts here, and the most commonly recommended solution is to boot from the Ubuntu LiveCD. That's another problem, because I cannot boot from a live USB either; this is the error message I get when booting from a LiveUSB:

        BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs) mount: mounting /dev/sda2 on /isodevice failed: Invalid argument
        Could not find the ISO /ubuntu-12.04-desktop-i386.iso.
        This could also happen if the file system is not clean because of an operating
        system crash, an interrupted boot process, an improper shutdown, or unplugging
        of a removable device without first unmounting or ejecting it. To fix this,
        simply reboot into Windows, let it fully start, log in, run 'chkdsk /r', then
        gracefully shut down and reboot back into Windows. After this you should be
        able to reboot again and resume the installation.

    I cannot boot into Windows because I don't have a Windows partition. Do I have to install Windows to fix this problem? Is there a way to fix this from the (initramfs) prompt? Please help. Thank you!
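
    The Ubuntu initramfs usually includes fsck for the root filesystem type, so one thing worth trying from that prompt before anything more drastic (a sketch; substitute the real root device, e.g. /dev/sda1, for the placeholder):

        (initramfs) fsck -y /dev/sdXN    # repair the dirty root filesystem
        (initramfs) reboot

    If fsck manages to clean the filesystem, the damage from the interrupted boot is often repaired enough for the normal boot to continue.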

    Read the article

  • Nexus 7 Possibly Bricked

    - by user214186
    I have a 1st-gen Nexus 7 (32GB). I used the steps at https://wiki.ubuntu.com/Nexus7/Installation to successfully install Ubuntu 13.04 desktop onto the tablet. It was working fine, and then I decided to upgrade to Ubuntu Touch. I booted the tablet into fastboot mode, but the commands 'adb devices' and 'sudo fastboot devices' would not see the device. I am performing these steps from an Ubuntu 12.04 desktop PC. Prior to installing 13.04, the device was seen fine. I made the mistake of performing the 'Device factory reset' step from https://wiki.ubuntu.com/Touch/Install - Step 2. Now when I try to boot the device I get the following:

        mount: mounting /dev on /root/dev failed: no such file or directory
        mount: mounting /dev on /root/sys failed: no such file or directory
        mount: mounting /proc on /root/proc failed: no such file or directory
        Target filesystem doesn't have requested /sbin/init.
        No init found. Try passing init= bootarg.

        BusyBox v1.20.2 (Ubuntu 1:1.20.0-0ubuntu1) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs)

    I have searched the web, but every reference to this problem is from people who still have ADB access to the device, so they can recover by flashing the tablet again. I can attach a keyboard to the USB port and access the BusyBox console, but I don't know what steps to take to recover from my error. Any suggestions would be helpful. Thanks
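
    If the bootloader itself is intact, the usual way back (sketched below; the image filename is an example - use whichever factory image matches the Wi-Fi 1st-gen Nexus 7, codename "grouper") is to enter the bootloader with Power + Volume-Down held together and reflash Google's factory image over fastboot:

        sudo fastboot devices                # the tablet must show up here first
        sudo fastboot oem unlock             # only if the reset re-locked the bootloader
        tar xzf nakasi-jdq39-factory.tgz     # unpack the downloaded factory image
        cd nakasi-jdq39
        sudo ./flash-all.sh                  # wraps the individual fastboot flash calls

    Note that fastboot only talks to the bootloader screen, not to the broken initramfs, which is why adb cannot see the device while it sits at that (initramfs) prompt.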

    Read the article

  • Alienware M17x R3: Possible downclock

    - by Ywen
    I recently installed Kubuntu 11.10 32-bit (I had graphics driver issues and wanted to try the 32-bit version) on my new Alienware M17x with a Core i7-2670QM CPU. The cores are supposed to be clocked at 2.2 GHz; however, the output of

        $ cat /proc/cpuinfo | grep -i "hz"

    gives me:

        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz    : 800.000
        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz    : 800.000
        (... the same pair repeated for all eight logical cores ...)

    If useful: the AC adapter is plugged in (yet the output is the same when the computer is powered only by the battery), and I have Firefox and Eclipse running. Does /proc/cpuinfo reflect a possible automatic downclock made to save power when processor load is low, or is this output abnormal?

    EDIT: OK, I checked, and yes, the output does vary as a function of load; I reach 2.2 GHz when needed. But my underlying problem remains. I was checking my CPU clock because I experienced poor performance when playing 720p video files in VLC or mplayer on battery (and I believe VLC by default uses only the CPU, not the GPU, to decode), whereas I have no such problems with VLC on Windows - which makes me think it isn't coming from a BIOS option; besides, every CPU-related option in the BIOS is turned ON.
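
    A quick way to confirm it is just the ondemand governor at work, and to test whether pinning the clock helps the video stutter (a sketch, assuming the cpufrequtils package; the sysfs paths are standard):

        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # likely "ondemand"
        sudo apt-get install cpufrequtils
        sudo cpufreq-set -r -g performance   # pin the cores to full clock (for testing only)
        cpufreq-info | grep "current CPU"    # verify the cores now sit near 2.20 GHz

    If playback is smooth under the performance governor, the stutter is a frequency-scaling latency issue rather than a raw CPU deficit.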

    Read the article

  • Show USB drives in launcher, but not mounted internal partitions

    - by Gabriel
    Well, the title pretty much says it all. I have partitions that appear in the launcher when the system mounts them, just as when a USB key is plugged in. I do not want these mounted internal hard disk partitions to show as icons in the launcher, but I do want my external USB drive to show there when I plug it in. I've tried MyUnity - it only has an option to show/hide all mounted devices, which is not what I want. Can this be done? From /proc/mounts (in the order seen in the screenshot):

        /dev/sdb1 /media/CEDD-DE31 vfat rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0077,codepage=cp437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,flush,errors=remount-ro 0 0
        /dev/sda3 /media/A423-E0E8 vfat rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0077,codepage=cp437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,flush,errors=remount-ro 0 0
        /dev/sda5 /media/586C25656C253EDE fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0
        /dev/sda6 /home/greg/80gb ext4 rw,relatime,user_xattr,barrier=1,data=ordered 0 0

    Other items from /proc/mounts not appearing in the Unity launcher:

        /dev/sda1 /boot/efi vfat rw,relatime,fmask=0022,dmask=0022,codepage=cp437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 0
        /dev/sda9 /mnt/backup ext4 rw,relatime,user_xattr,barrier=1,data=ordered 0 0
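
    On releases of that era (udisks 1.x), individual partitions can be hidden from the desktop with a udev rule. A sketch, using internal device names from the listing above (add one line per internal partition to hide; the filename is arbitrary):

        # /etc/udev/rules.d/99-hide-internal-partitions.rules
        KERNEL=="sda3", ENV{UDISKS_PRESENTATION_HIDE}="1"
        KERNEL=="sda5", ENV{UDISKS_PRESENTATION_HIDE}="1"

    Reload with sudo udevadm control --reload-rules and remount (or re-plug) the devices. External USB disks keep appearing because no rule matches them.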

    Read the article

  • My laptop with Linux/Ubuntu isn't working

    - by Andy Campos
    I have a Dell laptop with Ubuntu Linux. One day I tried to start it up, and a black screen appeared that says:

        GNU GRUB version 1.98+20100804-5ubuntu3

    (and these selectable options:)

        Ubuntu, with Linux 2.6.35-22-generic
        Ubuntu, with Linux 2.6.35-22-generic (recovery mode)
        Memory test (memtest86+)
        Memory test (memtest86+, serial console 115200)

    When I choose the first one, a bunch of text appears, like:

        mount: mounting /dev/disk/by-uuid/8396a225... failed: invalid argument
        mount: mounting /dev on /root/dev failed: no such file or directory
        mount: mounting /sys on /root/sys failed: no such file or directory
        mount: mounting /proc on /root/proc failed: no such file or directory
        Target file system doesn't have requested /sbin/init
        No init found. Try passing init= bootarg
        Enter 'help' for a list of built-in commands
        BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell (ash)
        (initramfs)

    When I enter 'help', a bunch more incomprehensible text appears, and whenever I press the Enter key all that pops up is (initramfs). If anyone can make rhyme or reason out of this, please help me out so it can boot up normally. Is there some kind of special command I have to type in? I know nothing about computers.

    Read the article

  • Can no longer boot with rEFIt and Grub on early 2006 MacBook Pro

    - by Don Quixote
    I don't know what happened to cause this. I have Snow Leopard, Ubuntu 11.04 Natty Narwhal, and Windows XP SP3 on my early-2006 MacBook Pro. It is a Core Duo unit, NOT Core 2 Duo, so it is 32-bit only - Model Identifier MacBookPro1,1. I use rEFIt 0.14 for my boot menu. For some reason, neither XP nor Ubuntu would boot anymore; I'd just get a black screen with a rapidly flashing underscore in the top-left corner. Having both those OSes failing to boot suggested a problem with the boot loader in my MBR. The rEFIt partition tool verified that my MBR partitions were still synced with my GPT partitions, so I rewrote my MBR partition table with fdisk while booted from Parted Magic:

        # fdisk /dev/sda
        (fdisk warns about the disk having a GPT. I press on anyway.)
        p    (Print the existing partition table to make sure it's OK.)
        w    (Write the old partition table back to disk. This also writes a new MBR boot loader.)

    After this, XP would boot but Ubuntu would not, with the same symptom. Next I used update-grub while chrooted into Ubuntu from Parted Magic:

        # mount /dev/sda3 /mnt
        # mount --bind /dev /mnt/dev
        # mount --bind /sys /mnt/sys
        # mount --bind /proc /mnt/proc
        # chroot /mnt

    Chroot issues some warnings about not being able to identify some group IDs. I don't know why that happens, or whether it is a problem. At this point, while still booted off Parted Magic's kernel, I am running from Natty's filesystem.

        # update-grub

    update-grub detects each of my operating systems, then claims to complete successfully, but Ubuntu still won't boot. I asked this same question over at rEFIt's SourceForge support forum, but there have been no replies yet. I also Googled quite a bit, and see many who have the same black-screen problem, but none of their situations seem quite like mine. Thanks for any help you can give me. -- Don Quixote
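
    One thing update-grub does not do is reinstall the boot images themselves. On a rEFIt/BIOS-emulation setup like this, GRUB typically lives in the boot sector of the Linux partition rather than in the MBR, so a sketch of the possibly missing step, run inside the same chroot (assuming /dev/sda3 is the Ubuntu root, as mounted above):

        # grub-install --force /dev/sda3   # --force is required to write a partition boot sector
        # update-grub

    This would also be consistent with the symptoms: rewriting the MBR with fdisk restored the generic MBR boot code (bringing XP back via its MBR chain), while Ubuntu, chainloaded by rEFIt through its partition boot sector, stayed broken until that sector is rewritten.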

    Read the article

  • How to restore Windows7 after restore ubuntu bootloader?

    - by Mateusz Rogulski
    First I will describe my situation in a few points: I installed Windows 7 and then Ubuntu 11.04 on my machine. Everything worked fine, and at startup I had a screen from Linux where I could choose the system. Then I reinstalled Windows 7 and installed Windows 8 on another partition, after which I could choose between Win7 and Win8 at startup. Then I needed my Ubuntu back, so I wanted to restore the boot loader from Ubuntu. I booted Ubuntu from USB and in a terminal ran:

        sudo fdisk -l

    which returned:

        /dev/sda1            1          13      104391   de  Dell Utility
        /dev/sda2           14        2805    22425601    5  Extended
        /dev/sda3   *     2805       41968   314572800    7  HPFS/NTFS
        /dev/sda4        41968       60802   151282688    7  HPFS/NTFS
        /dev/sda5           14        2445    19530752   83  Linux
        /dev/sda6         2445        2805     2893824   82  Linux swap / Solaris

    Next commands:

        sudo mount /dev/sda5 /mnt
        sudo mount --bind /dev /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo chroot /mnt
        grub-install /dev/sda

    I got "Installation finished. No error reported." And when I start my machine I have the old Ubuntu start screen to choose the system. Ubuntu works well, but there is no Windows 8 option, and my primary problem is that when I choose Windows 7 I get:

        error: no such device ...
        error: no such disk

    I have no idea what to do next. I really need both systems to work. Any help would be appreciated.
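
    GRUB's menu entries for Windows are generated by os-prober, which only runs when the configuration is regenerated, so a likely missing step (a sketch; run inside the same chroot, or from the installed Ubuntu once it boots) is:

        sudo os-prober     # should list both the Windows 7 and Windows 8 loaders
        sudo update-grub   # regenerates /boot/grub/grub.cfg with fresh chainload entries

    The "no such device" error on the existing Windows 7 entry suggests it still references a partition UUID from before the reinstall, which update-grub would also refresh.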

    Read the article

  • Session serialization in JavaEE environment

    - by Ionut
    Please consider the following scenario: we are working on a Java EE project for which scalability is starting to become an issue. Up until now we were able to scale up, but this is no longer an option, so we need to consider scaling out and preparing the app for a clustered environment. Our main concern right now is serializing the user sessions. Sadly, we did not consider the issue from the beginning, and we are encountering the following exception:

        java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: org.apache.catalina.session.StandardSessionFacade

    I did some research: this exception is thrown because there are objects stored in the session that do not implement the Serializable interface. Considering that quite a few custom objects all over the app are stored in the session without implementing this interface, it will require a lot of tedious work and dedication to fix all these class declarations. We will fix all of them, but the main concern is that in the future some developer may add a non-Serializable object to the session and break session serialization and replication over multiple nodes. As a quick overview of the project: we are developing on a home-grown framework based on Struts 1, with the Servlet 3.0 API. This means that at this point we are using the standard session.getAttribute() and session.setAttribute() to work with the session, and session handling is scattered all over the code base. Besides updating the classes of the objects stored in the session and making sure they implement the Serializable interface, what other precautions should we take to ensure reliable session replication at the application layer? I know it is a little bit late to consider this, but what would be the best practice in this case? Furthermore, are there any other issues we should consider regarding this transition? Thank you in advance!
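
    Since the Servlet 3.0 API is available, one guard against future regressions (a sketch, not a replacement for fixing the class declarations) is an HttpSessionAttributeListener that fails fast in development whenever a non-Serializable value lands in the session:

        import java.io.Serializable;
        import javax.servlet.annotation.WebListener;
        import javax.servlet.http.HttpSessionAttributeListener;
        import javax.servlet.http.HttpSessionBindingEvent;

        @WebListener
        public class SerializableSessionGuard implements HttpSessionAttributeListener {
            @Override
            public void attributeAdded(HttpSessionBindingEvent event) {
                check(event.getName(), event.getValue());
            }

            @Override
            public void attributeReplaced(HttpSessionBindingEvent event) {
                // getValue() here is the OLD value; fetch the new one from the session.
                check(event.getName(), event.getSession().getAttribute(event.getName()));
            }

            @Override
            public void attributeRemoved(HttpSessionBindingEvent event) {
                // nothing to validate on removal
            }

            private void check(String name, Object value) {
                if (value != null && !(value instanceof Serializable)) {
                    // Throw in dev/test; in production you may prefer to log a warning.
                    throw new IllegalStateException("Non-serializable session attribute '"
                            + name + "' of type " + value.getClass().getName());
                }
            }
        }

    Because it hooks setAttribute() at the container level, it catches offenders even though session handling is scattered across the code base.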

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared: reports that have links, datasets that can be used across different reports, and data sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month's T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don't write about LobsterPot client horror stories, so I'm writing about a situation that a fellow MVP friend asked me about recently instead.

    The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what's changed. But of course this isn't always possible. And it's much nicer to help someone with this kind of thing than to be trying to fix it yourself when you've just deleted the wrong data source. Unfortunately, SSRS lets you delete data sources without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend's colleague. So, suddenly there's a big problem - lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn't the hard part (that's just a connection string, really). The nasty bit is all the re-mapping to get those 400 reports working again.

    I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that reports are stored there. However, the information about which data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don't - they're catalogue items just like the reports. Let's have a look at the structure of these two tables (although if you're reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn't seem to be a Books Online page for this information, sorry about that. I'm also not going to look at all the columns, just ones that I find interesting enough to mention and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 - there don't seem to have been any changes here for quite a while.

    dbo.Catalog

    The primary key is ItemID. It's a uniqueidentifier. I'm not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what's what. But foreign keys are for that too...

    Path, Name and ParentID tell you where in the folder structure the item lives. Path isn't actually required - you could've done recursive queries to get there. But as that would be quite painful, I'm more than happy for the Path column to be there. Path contains the Name as well, incidentally.

    Type tells you what kind of item it is. Some examples are 1 for a folder and 2 for a report; 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it).

    Content is an image field, remembering that image doesn't necessarily store images - these days we'd rather use varbinary(max), but even in SQL Server 2012 this field is still image. It stores the actual item definition in binary form, whether it's actually an image, a report, whatever.

    LinkSourceID is used for linked reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID.

    Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn't be a separate table, but I guess that's just the way it goes. This field gets changed when the default parameters get changed in Report Manager.

    There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports you typically choose NOT to replace the data sources. Anyway, they're not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I'm not sure. They're in dbo.DataSource instead.

    dbo.DataSource

    The primary key is DSID. Yes, it's a uniqueidentifier...

    ItemID is a foreign key reference back to dbo.Catalog.

    Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question.

    Link is a uniqueidentifier which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You'd think this should be enforced by a foreign key, but it's not. It does allow NULLs, though.

    Flags is an int, and I'll come back to this.

    When a data source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you'd be wrong - and not because of the lack of a foreign key, either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there - it feels very much like the kind of Delete stored procedure that many people write, the kind of thing that means they don't need to worry about allowing cascading deletes with foreign keys, because the stored procedure does the lot.

    Except that it doesn't quite do that. If it deleted everything on a cascading delete, we'd've lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in - if the report is being deleted. But if a shared data source is being deleted, you don't want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure:

        UPDATE [DataSource]
        SET    [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
               [Link] = NULL
        FROM   [Catalog] AS C
               INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE  (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there's no semi-colon on the end (but I'd rather they fix the ntext and image types first), and don't get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what's going on with the Flags field.

    What I'd LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that's in the shared data source, the one that's about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I'd say that's probably still slightly better than LOSING THE INFORMATION COMPLETELY. Sorry, rant over. I should log a Connect item - I'll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they're broken. And this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the '2' bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc., whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we've just used. I'd also recommend checking for Link being NULL and there being no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports - in case you get a surprise from a different data source having broken in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
          AND ds.[Link] IS NULL
          AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a report model in the list as well - so if you had thought it was just going to be reports that were broken, you'd be forgetting something.

    So to fix those reports: get your new data source created in the catalogue, then find its ItemID by querying Catalog, using Path and Name to locate it, and use this value to fix them up. To fix the Flags field, just add 2; I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you're changing, just in case you need to revert something later after testing (doing it all in a transaction won't help, because you'll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /* insert your own GUID */
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
          AND ds.[Link] IS NULL
          AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there's no reason why 400-odd broken reports needs to be quite the nightmare that it could be. Really, it should be less than five minutes.

    @rob_farley

    Read the article

  • Where to put business logic in MVC design?

    - by BriskLabs Pakistan
    I have created a simple MVC Java application that adds records to a database through data forms. My app collects data, validates it, and stores it; the data is sourced online from different users and is mostly numeric in nature. Now, on the numeric data stored in the database (SQL Server), I want my app to be able to perform computations and display the results. The user is not interested in how the computations are done, so they must be encapsulated; the user must only be able to view the computed data, which might for example be (A column data - B column data) / C column data, etc. I know how to write stored procedures for this, but I want a 3-tier app. I want the data that I put into the database as a record to be worked upon by performing calculations on it. However, the original data should remain unaffected, while the new, post-calculation data must be stored as a new entity record in the database. Where should I write the code for this background calculation? As it is the rules and business logic, should it go in new JavaBeans files?
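
    A common answer is a service layer that sits between the controllers and the data access code. A minimal sketch (all names hypothetical; the DAO interface stands in for whatever persistence the app already uses):

        // Hypothetical domain object: raw numeric data as entered by users.
        public class Measurement {
            public final long id;
            public final double a, b, c;
            public Measurement(long id, double a, double b, double c) {
                this.id = id; this.a = a; this.b = b; this.c = c;
            }
        }

        // Hypothetical persistence boundary.
        public interface MeasurementDao {
            Measurement findById(long id);
            void insertDerived(long sourceId, double value);
        }

        // The service encapsulates the business rule; callers never see the formula.
        public class DerivedValueService {
            private final MeasurementDao dao;

            public DerivedValueService(MeasurementDao dao) {
                this.dao = dao;
            }

            // Reads the original record, computes, and stores the result as a NEW
            // record, leaving the source row untouched.
            public void computeAndStore(long measurementId) {
                Measurement m = dao.findById(measurementId);
                double derived = (m.a - m.b) / m.c; // example rule from the question
                dao.insertDerived(m.id, derived);
            }
        }

    The controller then calls DerivedValueService, keeping the rule out of both the view and the persistence code, which is the usual 3-tier separation.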

    Read the article

  • Battery not recognized on my laptop (and it recognizes my laptop as a desktop)

    - by AZorin
    I have installed Ubuntu (both 10.10 and the 11.04 pre-release) on my laptop, but my battery is not recognized, and the machine is detected as a desktop system rather than a laptop. I have tried to get the output of cat /proc/acpi/battery/BAT1/state, but the directory doesn't exist. I tried another guide that pastes battery info into this directory, but it doesn't let me do that, saying that the directory doesn't exist even though I'm trying to create it. I tried it in a root Nautilus and even on an install of Lubuntu (with a root file manager), but it still failed to budge. I really don't know what to do, as I have tried all the guides on the Internet that I could find. Is there any way to change the configuration file(s) that detect the internal hardware of the computer? The /proc directory is a temporary RAM-backed directory, afaik. Is there a directory where that data is stored permanently, from which the RAM copy is read, if you know what I mean? Thanks in advance. AZorin This issue has been reported as bug #764513.
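
    A side note that may help the diagnosis: /proc is a virtual view generated by the kernel on the fly (nothing behind it is stored on disk, which is why files cannot be created there), and on recent kernels the battery interface has moved from /proc/acpi to sysfs. So the first check is whether the kernel sees a battery at all (a sketch):

        ls /sys/class/power_supply/              # expect something like BAT0 or BAT1
        cat /sys/class/power_supply/BAT*/uevent 2>/dev/null
        sudo dmidecode -s chassis-type           # what the firmware claims the machine is

    If /sys/class/power_supply/ shows no BAT* entry, the problem lies in the ACPI tables or kernel rather than in any configuration file you could edit - which would also explain why pasting files into /proc cannot work.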

    Read the article

  • How do I increase the open files limit for a non-root user?

    - by iCode
    This is happening on Ubuntu 12.04 (Precise) 64-bit, kernel Linux 3.2.0-25-virtual. I'm trying to increase the number of open files allowed for a user; this is for my Eclipse Java application, where the current limit of 1024 is not enough. According to the posts I've found so far, I should be able to put lines into /etc/security/limits.conf like this:

        soft nofile 4096
        hard nofile 4096

    to increase the number of open files allowed for all users. But that's not working for me, and I think the problem is not related to that file. For all users, the default limit is 1024, regardless of what is in /etc/security/limits.conf (I have been rebooting after changing that file):

        $ ulimit -n
        1024

    Now, despite the entries in /etc/security/limits.conf, I can't increase that:

        $ ulimit -n 2048
        -bash: ulimit: open files: cannot modify limit: Operation not permitted

    The weird part is that I can change the limit downwards, but can't change it upwards - not even to go back to a number below the original limit:

        $ ulimit -n 800
        $ ulimit -n
        800
        $ ulimit -n 900
        -bash: ulimit: open files: cannot modify limit: Operation not permitted

    As root, I can change the limit to whatever I want, up or down. It doesn't even seem to care about the supposedly system-wide limit in /proc/sys/fs/file-max:

        # cat /proc/sys/fs/file-max
        188897
        # ulimit -n 188898
        # ulimit -n
        188898

    So far I haven't found any way to increase the open-files limit for a non-root user, and I really don't want to run my application as root. How should I properly do this? I have looked at all the posts and tried the given options, but no luck!
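
    For reference, entries in limits.conf need a domain in the first column - the lines as quoted above would be ignored (a leading * is often eaten by forum formatting). A sketch of the usual working setup on 12.04:

        # /etc/security/limits.conf -- columns: domain  type  item  value
        *    soft    nofile    4096
        *    hard    nofile    4096

        # /etc/pam.d/common-session (and common-session-noninteractive):
        # make sure this line is present so the limits are applied at login
        session required pam_limits.so

    After logging out and back in, ulimit -n should report 4096. The soft/hard distinction is also exactly why the limit can be lowered but not raised: an unprivileged process may only raise its soft limit up to its hard limit, and both were stuck at 1024.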

    Read the article

  • Why is my query soooooo slow?

    - by geekrutherford
    A stored procedure used in our production environment recently became so slow that it caused the calling web service to begin timing out. When run in Query Analyzer, the stored procedure took nearly 3 minutes to complete. The stored procedure itself does little more than build a small bit of dynamic SQL that calls a view with a WHERE clause at the end. At first the thought was that the query used within the view needed to be optimized; the query is quite long, and it is therefore easy to jump to this conclusion. Fortunately, after bringing the issue to the attention of a coworker, they asked: "Is there a WHERE clause, and if so, is there an index on the column(s) in it?" I had no idea, and quickly said as much. A quick check on the table/column used in the WHERE clause indicated that, indeed, there was no index. Before adding the index (and after admitting I am no SQL wiz), I checked the internet for info on the difference between clustered and non-clustered indexes; I found the OdeToCode site quite helpful. After adding the non-clustered index on the column, the query that used to take nearly 3 minutes now takes 10 seconds! Ah, if only I'd thought to do this ahead of time!
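
    For completeness, the fix amounts to a single statement along these lines (table and column names invented for illustration):

        -- Non-clustered: a separate B-tree on the filtered column, pointing back at the rows.
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
            ON dbo.Orders (CustomerID);

    A clustered index, by contrast, defines the physical order of the table itself, so a table can have only one; for an existing table that already has its own primary key, a non-clustered index on the WHERE-clause column is usually the right tool.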

    Read the article

  • File doesn't exist when trying to change permissions following the avasys image scan manual

    - by Howard Graham
    I was finally able to connect to avasys.jp, and downloaded and installed iscan_2.28.1-3.ltdl7_amd64.deb and iscan-data_1.13.0-1_all.deb. The programs appeared to install correctly. I then ran sane-find-scanner and got back:

        found USB scanner (vendor=0x04b8, product=0x012d) at libusb:001:003

    I then ran lsusb and got back:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 003: ID 04b8:012d Seiko Epson Corp. Perfection V10/V100 (GT-S600/F650)
        Bus 001 Device 004: ID 03f0:4817 Hewlett-Packard
        Bus 002 Device 002: ID 093a:2510 Pixart Imaging, Inc. Optical Mouse

    The Avasys Image Scan manual instructed me to run

        chmod 0666 /proc/bus/usb/001/003

    which returned:

        chmod: cannot access `/proc/bus/usb/001/003': No such file or directory

    In 12.04, no such directory exists; 12.04 appears to deal with USB in another way. What must I do to get the USB device at 001/003 recognized by xsane and sane as the port where the scanner can be found? What must I do to continue installing the scanner?
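
    On 12.04, /proc/bus/usb is indeed gone; the device nodes live under /dev/bus/usb, and the durable equivalent of that chmod is a udev rule keyed on the IDs that lsusb reported above (a sketch; the filename is arbitrary):

        # /etc/udev/rules.d/79-epson-scanner.rules
        ATTRS{idVendor}=="04b8", ATTRS{idProduct}=="012d", MODE="0666"

    Reload with sudo udevadm control --reload-rules, then unplug and replug the scanner. A one-off test without the rule would be sudo chmod 0666 /dev/bus/usb/001/003, but that won't survive a replug, since the device number changes each time.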

    Read the article

  • Problems syncing photos and strange effects of uploaded files from other devices

    - by Daniel
    I have a Galaxy Spica (GT-i5700), Android v2.1, rooted with Leshak dev 7 #123 - but never mind the root info; the problem would be the same unrooted. The photos from this phone are stored in "sdcard/images"; nevertheless, the phone also creates an "sdcard/DCIM" folder but only stores some thumbnails there.

    Problem no. 1: U1 only reads the DCIM folder for automatic photo upload, so photos stored on this phone are not uploaded. If I move photos to the DCIM folder, U1 recognises them and starts uploading. Possible solution: could there be an option in the settings to set a preferred photo folder?

    Problem no. 2: out of 74 pictures, 12 did not get uploaded. Pressing "Retry failed transfers" in Settings does nothing. Tapping the files whose status is "Upload failed, tap to retry" only changes the status to "Uploading..." but nothing gets uploaded. If I upload another file to U1, it is uploaded directly without any problem. It has nothing to do with file size: 1.1 MB files have uploaded fine, while some failed ones are 0.8 MB.

    Problem no. 3: the photos from DCIM are in my case uploaded to a folder called "Pictures - GT-I5700" in U1. If I log in to the homepage and from there upload another photo to "Pictures - GT-I5700", it shows up in U1 on my phone fine. But when I tap it, U1 downloads the photo to "sdcard/U1/Pictures - GT-I5700". If it syncs photos from "sdcard/DCIM" to a specific folder, why not also download files to the same folder from which they are synced? After a while of usage, syncing, and uploading files from different clients, it becomes a mishmash of folders and places where files are stored, and considering that, I see no use for U1 at all.

    Another question: if my SD card in some way breaks down, some folders cannot be read, or the card is temporarily swapped while U1 is running, does U1 consider that as files having been deleted, and also delete them from the cloud?

    Read the article

  • Not enough disk space '/' in AWS instance

    - by Sumant
    I am running an Ubuntu 11.04 instance for my web server on the AWS cloud, and now there is no disk space left in the / partition of my server. df -ah says this:

        Filesystem    Size  Used  Avail  Use%  Mounted on
        /dev/xvda1    7.9G  7.8G    97M   99%  /
        proc             0     0      0     -  /proc
        none             0     0      0     -  /sys
        fusectl          0     0      0     -  /sys/fs/fuse/connections
        none             0     0      0     -  /sys/kernel/debug
        none             0     0      0     -  /sys/kernel/security
        none          3.7G  112K   3.7G    1%  /dev
        none             0     0      0     -  /dev/pts
        none          3.7G     0   3.7G    0%  /dev/shm
        none          3.7G   80K   3.7G    1%  /var/run
        none          3.7G     0   3.7G    0%  /var/lock
        /dev/xvdb     414G   16G   377G    4%  /mnt

    I have tried these things to get some extra space on the / partition: cleaned up all log files for Apache, removed all unnecessary files from the server, and cleaned up the home directory. But I still don't have enough space. The instance type is m1.large with an 8GB EBS root volume. Since I have plenty of disk space on /dev/xvdb, is there a way I can allocate some of it to /, or any other way out? Please suggest possible solutions. Is it possible to use the same /dev/xvdb partition with another instance?
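
    Two hedged sketches of the usual ways out. The quick one moves a bulky directory onto the big disk and links it back - but note that /mnt on m1.large is instance-store (ephemeral): its contents do not survive a stop/start, so only put rebuildable data there:

        sudo du -xsh /* 2>/dev/null | sort -h    # find what is actually filling /
        sudo mv /var/cache/apt /mnt/apt-cache    # example culprit; pick your own
        sudo ln -s /mnt/apt-cache /var/cache/apt

    The durable one is to resize the root EBS volume: stop the instance, snapshot the 8GB root volume, create a larger volume from the snapshot, attach it in place of the old root, and grow the filesystem (sudo resize2fs /dev/xvda1) after boot. And no - an EBS volume can only be attached to one instance at a time, and /dev/xvdb here is not EBS anyway.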

    Read the article

  • Awesome and LXDE desktop managers messed up KDE

    - by Caleb1994
    I saw a window manager named "Awesome" earlier on Google+, and thought I'd give it a try. In short, I didn't like it, but it got me wondering what other desktops I hadn't tried were like. The first one to come to mind was LXDE, so I installed and tried that. I wasn't a big fan of that either, so I went to log back into KDE. Only, when I log in, everything is screwy. My theme is weird (although, according to System Settings, it is still the same). All the categories and application shortcuts in the KMenu are gone, except my favorites, which are now renamed with the "short name", it seems. I know these things are global resources, so it is very likely that one of these window managers screwed something up, but I need it fixed. Actually, the theme problem fixed itself after restarting, but the KMenu items disappearing is still a problem. Does anyone know where these items are stored? (I know they are just .desktop files somewhere, IIRC, but I don't remember where they are usually stored, so I can't check whether they still exist.) I'm hoping it's just a matter of a broken link or something somewhere, not deleted shortcuts... :/ In summary: any ideas on what caused this? Do you know how to fix the KMenu, or at least where the .desktop shortcuts for the KMenu are stored, so I can see if they still exist? (crosses fingers)
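
    For the "where are they stored" part, a sketch of the usual places to check on a KDE 4-era system (paths may vary slightly by release):

        ls /usr/share/applications ~/.local/share/applications   # the .desktop files themselves
        ls ~/.config/menus                                        # menu layout, e.g. applications-kmenuedit.menu

    If the .desktop files are intact, renaming ~/.config/menus/applications-kmenuedit.menu out of the way (it gets regenerated) will often restore the default KMenu categories, since the desktops all share the same freedesktop.org menu layout files and LXDE's menu packages can alter the merged result.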

    Read the article
