Search Results

Search found 25113 results on 1005 pages for 'grouped table'.


  • Unaccounted for database size

    - by Nazadus
    I currently have a database that is 20GB in size. I've run a few scripts which show each table's size (and other incredibly useful information such as index stuff) and the biggest table is 1.1 million records which takes up 150MB of data. We have fewer than 50 tables, most of which take up less than 1MB of data. After looking at the size of each table I don't understand why the database shouldn't be 1GB in size after a shrink. The amount of available free space that SqlServer (2005) reports is 0%. The log mode is set to simple. At this point my main concern is that I feel like I have 19GB of unaccounted-for used space. Is there something else I should look at?
    Normally I wouldn't care and would make this a passive research project, except this particular situation calls for us to do a backup and restore on a weekly basis to put a copy on a satellite (which has no internet, so it must be done manually). I'd much rather copy 1GB (or even 5GB!) than 20GB of data each week.
    sp_spaceused reports the following: Navigator-Production 19184.56 MB 3.02 MB And the second part of it: 19640872 KB 19512112 KB 108184 KB 20576 KB I've found a few other scripts (such as the ones from two of the server database size questions here), and they all report the same information found above or below. The script I am using is from SqlTeam. Here is the header info: * BigTables.sql * Bill Graziano (SQLTeam.com) * graz@<email removed> * v1.11
    The top few tables show this (table, rows, reserved space, data, index, unused, etc):
    Activity 1143639 131 MB 89 MB 41768 KB 1648 KB 46% 1%
    EventAttendance 883261 90 MB 58 MB 32264 KB 328 KB 54% 0%
    Person 113437 31 MB 15 MB 15752 KB 912 KB 103% 3%
    HouseholdMember 113443 12 MB 6 MB 5224 KB 432 KB 82% 4%
    PostalAddress 48870 8 MB 6 MB 2200 KB 280 KB 36% 3%
    The rest of the tables are either the same in size or smaller. No more than 50 tables.
    Update 1: All tables use unique identifiers, usually an int incremented by 1 per row. I've also re-indexed everything. I ran the dbcc shrink command as well as updating the usage before and after. And over and over. An interesting thing I found is that when I restarted the server and confirmed no one was using it (and no maintenance procs are running -- this is a very new application, under a week old) and went to run the shrink, every now and then it would say something about data having changed. Googling yielded too few useful answers, with the obvious not applying (it was 1am and I had disconnected everyone, so it seems impossible that was really the case). The data was migrated via C# code which basically looked at another server and brought things over. The quantity of deletes, at this point in time, is probably under 50k rows. Even if those rows were the biggest rows, that wouldn't be more than 100M I would imagine. When I go to shrink via the GUI it reports 0% available to shrink, indicating that I've already gotten it as small as it thinks it can go.
    Update 2: sp_spaceused 'Activity' yields this (which seems right on the money): Activity 1143639 134488 KB 91072 KB 41768 KB 1648 KB Fill factor was 90. All primary keys are ints. Here is the command I used to 'updateusage': DBCC UPDATEUSAGE(0);
    Update 3: Per Edosoft's request: Image 111975 2407773 19262184 It appears as though the image table believes it's the 19GB portion. I don't understand what this means though. Is it really 19GB or is it misrepresented?
    Update 4: Talking to a co-worker, I found out that it's because of the pages, as someone else here has also stated the potential for that. The only index on the image table is a clustered PK. Is this something I can fix or do I just have to deal with it? The regular script shows the Image table to be 6MB in size.
    Update 5: I think I'm just going to have to deal with it after further research. The images have been resized to be roughly 2-5KB each; on a normal file system they don't consume much space, but on SqlServer they seem to consume considerably more. The real answer, in the long run, will likely be separating that table into another partition or something similar.
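    A sketch of one way to account for the space, assuming SQL Server 2005's catalog views (run in the affected database); the LOB_DATA rows are where image/text pages show up, which scripts that only look at in-row data tend to under-report:

        -- Reserved space per table and allocation type (in-row vs LOB pages).
        -- Assumes SQL Server 2005+ catalog views; pages are 8 KB each.
        SELECT  o.name                              AS table_name,
                au.type_desc                        AS allocation_type,
                SUM(au.total_pages) * 8 / 1024      AS reserved_mb
        FROM    sys.allocation_units AS au
        JOIN    sys.partitions       AS p
                ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
                OR (au.type = 2       AND au.container_id = p.partition_id)
        JOIN    sys.objects          AS o
                ON o.object_id = p.object_id
        WHERE   o.is_ms_shipped = 0
        GROUP BY o.name, au.type_desc
        ORDER BY reserved_mb DESC;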

    Read the article

  • Diving into OpenStack Network Architecture - Part 2 - Basic Use Cases

    - by Ronen Kofman
    In the previous post we reviewed several network components including Open vSwitch, Network Namespaces, Linux Bridges and veth pairs. In this post we will take three simple use cases and see how those basic components come together to create a complete SDN solution in OpenStack. With those three use cases we will review almost the entire network setup and see how all the pieces work together. The use cases we will use are:
    1. Create network – what happens when we create a network and how we can create multiple isolated networks.
    2. Launch a VM – once we have networks we can launch VMs and connect them to networks.
    3. DHCP request from a VM – OpenStack can automatically assign IP addresses to VMs. This is done through a local DHCP service controlled by OpenStack Neutron. We will see how this service runs and what a DHCP request and response look like.
    In this post we will show connectivity; we will see how packets get from point A to point B. We first focus on what a configured deployment looks like and only later discuss how and when the configuration is created. Personally I found it very valuable to see the actual interfaces and how they connect to each other through examples and hands-on experiments. After the end game is clear and we know how the connectivity works, we will take a step back in a later post and explain how Neutron configures the components to provide such connectivity. We are going to get pretty technical shortly and I recommend trying these examples on your own deployment or using the Oracle OpenStack Tech Preview. Understanding these three use cases thoroughly and how to look at them will be very helpful when trying to debug a deployment in case something does not work.
    Use case #1: Create Network
    Creating a network is a simple operation; it can be performed from the GUI or the command line. When we create a network in OpenStack the network is only available to the tenant who created it, or it can be defined as "shared" and then used by all tenants. A network can have multiple subnets, but for this demonstration, and for simplicity, we will assume that each network has exactly one subnet.
Creating a network from the command line will look like this: # neutron net-create net1 Created a new network: +---------------------------+--------------------------------------+ | Field                     | Value                                | +---------------------------+--------------------------------------+ | admin_state_up            | True                                 | | id                        | 5f833617-6179-4797-b7c0-7d420d84040c | | name                      | net1                                 | | provider:network_type     | vlan                                 | | provider:physical_network | default                              | | provider:segmentation_id  | 1000                                 | | shared                    | False                                | | status                    | ACTIVE                               | | subnets                   |                                      | | tenant_id                 | 9796e5145ee546508939cd49ad59d51f     | +---------------------------+--------------------------------------+ Creating a subnet for this network will look like this: # neutron subnet-create net1 10.10.10.0/24 Created a new subnet: +------------------+------------------------------------------------+ | Field            | Value                                          | +------------------+------------------------------------------------+ | allocation_pools | {"start": "10.10.10.2", "end": "10.10.10.254"} | | cidr             | 10.10.10.0/24                                  | | dns_nameservers  |                                                | | enable_dhcp      | True                                           | | gateway_ip       | 10.10.10.1                                     | | host_routes      |                                                | | id               | 2d7a0a58-0674-439a-ad23-d6471aaae9bc           | | ip_version       | 4                                              | | name             |                                                | | network_id       | 5f833617-6179-4797-b7c0-7d420d84040c           | | tenant_id        | 9796e5145ee546508939cd49ad59d51f               | +------------------+------------------------------------------------+ We now have a network and a subnet, on the network topology view this looks like this: Now let’s dive in and see what happened under the hood. Looking at the control node we will discover that a new namespace was created: # ip netns list qdhcp-5f833617-6179-4797-b7c0-7d420d84040c   The name of the namespace is qdhcp-<network id> (see above), let’s look into the namespace and see what’s in it: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00     inet 127.0.0.1/8 scope host lo     inet6 ::1/128 scope host        valid_lft forever preferred_lft forever 12: tap26c9b807-7c: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN     link/ether fa:16:3e:1d:5c:81 brd ff:ff:ff:ff:ff:ff     inet 10.10.10.3/24 brd 10.10.10.255 scope global tap26c9b807-7c     inet6 fe80::f816:3eff:fe1d:5c81/64 scope link        valid_lft forever preferred_lft forever   We see two interfaces in the namespace, one is the loopback and the other one is an interface called “tap26c9b807-7c”. This interface has the IP address of 10.10.10.3 and it will also serve dhcp requests in a way we will see later. 
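    Before tracing connectivity any further, the same objects can be confirmed from the CLI. A quick verification sketch, assuming the neutron client and the UUIDs used in the listings above:

        # show the network and its subnet as Neutron sees them
        neutron net-show net1
        neutron subnet-show 2d7a0a58-0674-439a-ad23-d6471aaae9bc

        # confirm the DHCP namespace for this network exists on the control node
        ip netns list | grep 5f833617-6179-4797-b7c0-7d420d84040c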
Let’s trace the connectivity of the “tap26c9b807-7c” interface from the namespace.  First stop is OVS, we see that the interface connects to bridge  “br-int” on OVS: # ovs-vsctl show 8a069c7c-ea05-4375-93e2-b9fc9e4b3ca1     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-ex         Port br-ex             Interface br-ex                 type: internal     Bridge br-int         Port "int-br-eth2"             Interface "int-br-eth2"         Port "tap26c9b807-7c"             tag: 1             Interface "tap26c9b807-7c"                 type: internal         Port br-int             Interface br-int                 type: internal     ovs_version: "1.11.0"   In the picture above we have a veth pair which has two ends called “int-br-eth2” and "phy-br-eth2", this veth pair is used to connect two bridge in OVS "br-eth2" and "br-int". In the previous post we explained how to check the veth connectivity using the ethtool command. It shows that the two are indeed a pair: # ethtool -S int-br-eth2 NIC statistics:      peer_ifindex: 10 . .   #ip link . . 10: phy-br-eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 . . Note that “phy-br-eth2” is connected to a bridge called "br-eth2" and one of this bridge's interfaces is the physical link eth2. This means that the network which we have just created has created a namespace which is connected to the physical interface eth2. eth2 is the “VM network” the physical interface where all the virtual machines connect to where all the VMs are connected. About network isolation: OpenStack supports creation of multiple isolated networks and can use several mechanisms to isolate the networks from one another. The isolation mechanism can be VLANs, VxLANs or GRE tunnels, this is configured as part of the initial setup in our deployment we use VLANs. When using VLAN tagging as an isolation mechanism a VLAN tag is allocated by Neutron from a pre-defined VLAN tags pool and assigned to the newly created network. By provisioning VLAN tags to the networks Neutron allows creation of multiple isolated networks on the same physical link.  The big difference between this and other platforms is that the user does not have to deal with allocating and managing VLANs to networks. The VLAN allocation and provisioning is handled by Neutron which keeps track of the VLAN tags, and responsible for allocating and reclaiming VLAN tags. In the example above net1 has the VLAN tag 1000, this means that whenever a VM is created and connected to this network the packets from that VM will have to be tagged with VLAN tag 1000 to go on this particular network. This is true for namespace as well, if we would like to connect a namespace to a particular network we have to make sure that the packets to and from the namespace are correctly tagged when they reach the VM network. In the example above we see that the namespace interface “tap26c9b807-7c” has vlan tag 1 assigned to it, if we examine OVS we see that it has flows which modify VLAN tag 1 to VLAN tag 1000 when a packet goes to the VM network on eth2 and vice versa. 
We can see this using the dump-flows command on OVS for packets going to the VM network we see the modification done on br-eth2: #  ovs-ofctl dump-flows br-eth2 NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18669.401s, table=0, n_packets=857, n_bytes=163350, idle_age=25, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1000,NORMAL  cookie=0x0, duration=165108.226s, table=0, n_packets=14, n_bytes=1000, idle_age=5343, hard_age=65534, priority=2,in_port=2 actions=drop  cookie=0x0, duration=165109.813s, table=0, n_packets=1671, n_bytes=213304, idle_age=25, hard_age=65534, priority=1 actions=NORMAL   For packets coming from the interface to the namespace we see the following modification: #  ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4):  cookie=0x0, duration=18690.876s, table=0, n_packets=1610, n_bytes=210752, idle_age=1, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL  cookie=0x0, duration=165130.01s, table=0, n_packets=75, n_bytes=3686, idle_age=4212, hard_age=65534, priority=2,in_port=1 actions=drop  cookie=0x0, duration=165131.96s, table=0, n_packets=863, n_bytes=160727, idle_age=1, hard_age=65534, priority=1 actions=NORMAL   To summarize we can see that when a user creates a network Neutron creates a namespace and this namespace is connected through OVS to the “VM network”. OVS also takes care of tagging the packets from the namespace to the VM network with the correct VLAN tag and knows to modify the VLAN for packets coming from VM network to the namespace. Now let’s see what happens when a VM is launched and how it is connected to the “VM network”. Use case #2: Launch a VM Launching a VM can be done from Horizon or from the command line this is how we do it from Horizon: Attach the network: And Launch Once the virtual machine is up and running we can see the associated IP using the nova list command : # nova list +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | ID                                   | Name         | Status | Task State | Power State | Networks        | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ | 3707ac87-4f5d-4349-b7ed-3a673f55e5e1 | Oracle Linux | ACTIVE | None       | Running     | net1=10.10.10.2 | +--------------------------------------+--------------+--------+------------+-------------+-----------------+ The nova list command shows us that the VM is running and that the IP 10.10.10.2 is assigned to this VM. Let’s trace the connectivity from the VM to VM network on eth2 starting with the VM definition file. The configuration files of the VM including the virtual disk(s), in case of ephemeral storage, are stored on the compute node at/var/lib/nova/instances/<instance-id>/. 
Looking into the VM definition file, libvirt.xml, we see that the VM is connected to an interface called "tap53903a95-82" which is connected to a Linux bridge called "qbr53903a95-82": <interface type="bridge">       <mac address="fa:16:3e:fe:c7:87"/>       <source bridge="qbr53903a95-82"/>       <target dev="tap53903a95-82"/>     </interface>   Looking at the bridge using the brctl show command we see this: # brctl show bridge name     bridge id               STP enabled     interfaces qbr53903a95-82          8000.7e7f3282b836       no              qvb53903a95-82                                                         tap53903a95-82    The bridge has two interfaces, one connected to the VM ("tap53903a95-82") and another one ("qvb53903a95-82") connected to the "br-int" bridge on OVS: # ovs-vsctl show 83c42f80-77e9-46c8-8560-7697d76de51c     Bridge "br-eth2"         Port "br-eth2"             Interface "br-eth2"                 type: internal         Port "eth2"             Interface "eth2"         Port "phy-br-eth2"             Interface "phy-br-eth2"     Bridge br-int         Port br-int             Interface br-int                 type: internal         Port "int-br-eth2"             Interface "int-br-eth2"         Port "qvo53903a95-82"             tag: 3             Interface "qvo53903a95-82"     ovs_version: "1.11.0"   As we showed earlier, "br-int" is connected to "br-eth2" on OVS using the veth pair int-br-eth2/phy-br-eth2, and br-eth2 is connected to the physical interface eth2. The whole flow end to end looks like this: VM → tap53903a95-82 (virtual interface) → qbr53903a95-82 (Linux bridge) → qvb53903a95-82 (interface connecting the Linux bridge to OVS bridge br-int) → int-br-eth2 (one end of the veth pair) → phy-br-eth2 (the other end of the veth pair) → eth2 (physical interface). The purpose of the Linux bridge connecting to the VM is to allow security group enforcement with iptables. Security groups are enforced at the edge point, which is the interface of the VM; since iptables cannot be applied to OVS bridges, we use a Linux bridge to apply them. In the future we hope to see this Linux bridge going away. VLAN tags: As we discussed in the first use case, net1 is using VLAN tag 1000; looking at OVS above we see that qvo53903a95-82 is tagged with VLAN tag 3. The modification from VLAN tag 3 to 1000 as we go to the physical network is done by OVS as part of the packet flow of br-eth2, in the same way we showed before. To summarize, when a VM is launched it is connected to the VM network through the chain of elements described here, and on the way from the VM to the network and back the VLAN tag is modified. Use case #3: Serving a DHCP request coming from the virtual machine In the previous use cases we have shown that both the namespace called qdhcp-<some id> and the VM end up connecting to the physical interface eth2 on their respective nodes, and both will tag their packets with VLAN tag 1000. We saw that the namespace has an interface with the IP 10.10.10.3. Since the VM and the namespace are connected to each other and have interfaces on the same subnet they can ping each other; in this picture we see a ping from the VM, which was assigned 10.10.10.2, to the namespace: The fact that they are connected and can ping each other can become very handy when something doesn't work right and we need to isolate the problem. In such a case, knowing that we should be able to ping from the VM to the namespace and back can be used to trace the disconnect using tcpdump or other monitoring tools. 
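    A tiny sketch of that isolation step, using the namespace and addresses from this example (the UUID, tap interface and IPs are the ones shown above):

        # on the control node: ping the VM from inside the DHCP namespace
        ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c ping -c 3 10.10.10.2

        # if the ping fails, watch the namespace's tap interface to see whether
        # ICMP traffic arrives at all
        ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n -i tap26c9b807-7c icmp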
To serve DHCP requests coming from VMs on the network Neutron uses a Linux tool called “dnsmasq”,this is a lightweight DNS and DHCP service you can read more about it here. If we look at the dnsmasq on the control node with the ps command we see this: dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap26c9b807-7c --except-interface=lo --pid-file=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host --dhcp-optsfile=/var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/opts --leasefile-ro --dhcp-range=tag0,10.10.10.0,static,120s --dhcp-lease-max=256 --conf-file= --domain=openstacklocal The service connects to the tap interface in the namespace (“--interface=tap26c9b807-7c”), If we look at the hosts file we see this: # cat  /var/lib/neutron/dhcp/5f833617-6179-4797-b7c0-7d420d84040c/host fa:16:3e:fe:c7:87,host-10-10-10-2.openstacklocal,10.10.10.2   If you look at the console output above you can see the MAC address fa:16:3e:fe:c7:87 which is the VM MAC. This MAC address is mapped to IP 10.10.10.2 and so when a DHCP request comes with this MAC dnsmasq will return the 10.10.10.2.If we look into the namespace at the time we initiate a DHCP request from the VM (this can be done by simply restarting the network service in the VM) we see the following: # ip netns exec qdhcp-5f833617-6179-4797-b7c0-7d420d84040c tcpdump -n 19:27:12.191280 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:fe:c7:87, length 310 19:27:12.191666 IP 10.10.10.3.bootps > 10.10.10.2.bootpc: BOOTP/DHCP, Reply, length 325   To summarize, the DHCP service is handled by dnsmasq which is configured by Neutron to listen to the interface in the DHCP namespace. Neutron also configures dnsmasq with the combination of MAC and IP so when a DHCP request comes along it will receive the assigned IP. Summary In this post we relied on the components described in the previous post and saw how network connectivity is achieved using three simple use cases. These use cases gave a good view of the entire network stack and helped understand how an end to end connection is being made between a VM on a compute node and the DHCP namespace on the control node. One conclusion we can draw from what we saw here is that if we launch a VM and it is able to perform a DHCP request and receive a correct IP then there is reason to believe that the network is working as expected. We saw that a packet has to travel through a long list of components before reaching its destination and if it has done so successfully this means that many components are functioning properly. In the next post we will look at some more sophisticated services Neutron supports and see how they work. We will see that while there are some more components involved for the most part the concepts are the same. @RonenKofman

    Read the article

  • Can't populate cells with an array when I have loaded a second UITableViewController

    - by richard Stephenson
    Hi there, I'm very new to iPhone programming and I'm creating my first app (a World Cup one). The first view is a table view; the cell text label is filled from an array, so it shows all the groups (Group A, B, C, etc.). Then when you select a group it pushes another UITableViewController, but whatever I do I can't set the text label of its cells (e.g. France, Mexico, South Africa, etc.). In fact nothing I do to cellForRowAtIndexPath makes a difference. Could someone tell me what I'm doing wrong please? Thanks. Here is my code for the view controller: #import "GroupADetailViewController.h" @implementation GroupADetailViewController @synthesize groupLabel = _groupLabel; @synthesize groupADetail = _groupADetail; @synthesize teamsInGroupA; #pragma mark Memory management - (void)dealloc { [_groupADetail release]; [_groupLabel release]; [super dealloc]; } #pragma mark View lifecycle - (void)viewDidLoad { [super viewDidLoad]; // Set the number label to show the number data teamsInGroupA = [[NSArray alloc]initWithObjects:@"France",@"Mexico",@"Uruguay",@"South Africa",nil]; NSLog(@"loaded"); // Set the title to also show the number data [[self navigationItem]setTitle:@"Group A"]; //[[self navigationItem]cell.textLabel.text:@"test"]; //[[self navigationItem] setTitle[NSString String } - (void)viewDidUnload { [self setgroupLabel:nil]; } #pragma mark Table view methods - (NSInteger)numberOfSectionsInTableView:(UITableView*)tableView { // Return the number of sections in the table view return 1; } - (NSInteger)tableView:(UITableView*)tableView numberOfRowsInSection:(NSInteger)section { // Return the number of rows in a specific section // Since we only have one section, just return the number of rows in the table return 4; NSLog:("count is %d",[teamsInGroupA count]); } - (UITableViewCell*)tableView:(UITableView*)tableView cellForRowAtIndexPath:(NSIndexPath*)indexPath { static NSString *cellIdentifier2 = @"Cell2"; // Reuse an existing cell if one is available for reuse UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellIdentifier2]; // If no cell was available, create a new one if (cell == nil) { NSLog(@"no cell, creating"); cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:cellIdentifier2] autorelease]; [cell setAccessoryType:UITableViewCellAccessoryDisclosureIndicator]; } NSLog(@"cell already there"); // Configure the cell to show the data for this row //[[cell textLabel]setText:[NSString string //[[cell textLabel]setText:[teamsInGroupA objectAtIndex:indexPath.row]]; //NSUInteger row = [indexPath row]; //[cell setText:[[teamsInGroupA objectAtIndex:indexPath:row]retain]]; //cell.textLabel.text:@"Test" [[cell textLabel]setText:[teamsInGroupA objectAtIndex:indexPath.row]]; return cell; } @end

    Read the article

  • Thread 1: Program received signal: "SIGABRT"

    - by user813678
    When i try to run my app on my iPhone, I get " Thread 1: Program received signal:"Sigbart" xCode say that points to [self.navigationController pushViewController:detailViewController animated:YES]; import "RootViewController.h" import "global.h" import "golfbaner.h" @implementation RootViewController @synthesize banenavn; (void)viewDidLoad { [super viewDidLoad]; NSArray *temp = [[NSArray alloc] initWithObjects: @"Alsten Golfklubb", @"Arendal og Omegn Golfklubb", @"Asker Golfklubb", @"Askim Golfklubb", @"Atlungstad Golfklubb", @"Aurskog Golfpark", @"Ballerud Golfklubb", @"Bamble Golfklubb", @"Bergen Golfklubb", @"Bjorli Golfklubb", @"Bjørnefjorden Golfklubb", @"Bjaavann Golfklubb", @"Bodø Golfbane", @"Borre Golfbane", @"Borregaard Golfklubb", @"Brønnøysund Golfklubb", @"Byneset Golfklubb", @"Bærum Golfklubb", @"Drammen Golfklubb", @"Drøbak Golfklubb", @"Egersund Golfklubb", @"Eidskog Golfklubb", @"Eiker Golfklubb", @"Ekholt Golfklubb", @"Elverum Golfklubb", @"Fana Golfklubb", @"Fet Golfklubb", @"Frosta Golfklubb", @"Geilo Golfklubb", @"Giske Golfklubb", @"Gjerdrum Golfpark", @"Gjersjøen Golfklubb", @"Gjøvik og Toten Golfklubb", @"Gran Golfklubb", @"Grenland Golfklubb", @"Grimstad Golfklubb", @"Grini Golfklubb", @"Groruddalen Golfklubb", @"Grønmo Golfklubb", @"Hafjell Golfklubb", @"Haga Golfpark", @"Hakadal Golfklubb", @"Halden Golfklubb", @"Hallingdal Golfklubb", @"Hammerfest og Kvalsund Golfklubb", @"Hardanger Golfklubb", @"Harstad Golfklubb", @"Haugaland Golfklubb", @"Hauger Golf", @"Haugesund Golfklubb", @"Helgeland Golfklubb", @"Hemsedal Golfklubb", @"Herdla Golfklubb", @"Hitra Golfklubb", @"Hof Golfklubb", @"Holtsmark Golfklubb", @"Hovden Golfklubb", @"Hurum Golfklubb", @"Huseby og Hankø Golfklubb", @"Hvaler Golfklubb", @"Hvam Golfklubb", @"Jæren Golfklubb", @"Karasjok Golfklubb", @"Karmøy Golfklubb", @"Kjekstad Golfklubb", @"Klæbu Golfklubb", @"Kongsberg Golfklubb", @"Kongsvinger Golfklubb", @"Kragerø Golfklubb", @"Kristiansand Golfklubb", @"Kristiansund og Omegn Golfklubb", @"Krokhol Golfklubb", @"Kvinesdal og Omegn Golfklubb", @"Kvinnherad Golfklubb", @"Kvitfjell", @"Larvik Golfklubb", @"Lillehammer Golf Park", @"Lillestrøm Golfklubb", @"Lofoten Golf Links", @"Lommedalen Golfklubb", @"Losby Golfklubb", @"Lærdal Golfklubb", @"Lønne Golfklubb", @"Mandal Golfklubb", @"Meland Golfklubb", @"Midt-Troms Golfklubb", @"Miklagard Golfklubb", @"Mjøsen Golfklubb", @"Moa Golfklubb", @"Modum Golfklubb", @"Molde Golfklubb", @"Moss og Rygge Golfklubb", @"Mørk Golfklubb", @"Namdal Golfklubb", @"Namsos Golfklubb", @"Narvik Golfklubb", @"Nes Golfklubb", @"Nittedal Golfklubb", @"Nordfjord Golfklubb", @"Nordvegen Golfklubb", @"Norefjell Golfklubb", @"Norsjø og Omegn Golfklubb", @"North Cape Golf Club", @"Nærøysund Golfklubb", @"Nøtterøy Golfklubb", @"Odda Golfklubb", @"Ogna Golfklubb", @"Onsøy Golfklubb", @"Oppdal Golfklubb", @"Oppegård Golfklubb", @"Oslo Golfklubb", @"Oustøen Country Club", @"Polarsirkelen Golf", @"Preikestolen Golfklubb", @"Randaberg Golfklubb", @"Randsfjorden Golfklubb", @"Rauma Golfklubb", @"Re Golfklubb", @"Ringerike Golfklubb", @"Rygge Flystasjon Golf Club", @"Røros Golfklubb", @"Salten Golfklubb", @"Sandane Golfklubb", @"Sande Golfklubb", @"Sandefjord Golfklubb", @"Sandnes Golfklubb", @"Sauda Golfklubb", @"Selbu Golfklubb", @"Selje Golfklubb", @"Setesdal Golfklubb", @"Skei Golfklubb", @"Ski Golfklubb", @"Skjeberg Golfklubb", @"Smøla Golfklubb", @"Sola Golfklubb", @"Solastranden Golfklubb", @"Solum Golfklubb", @"Soon Golfklubb", @"Sorknes Golfklubb", @"Sotra Golfklubb", 
@"Stavanger Golfklubb", @"Steinkjer Golfklubb", @"Stiklestad Golfklubb", @"Stjørdal Golfklubb", @"Stord Golfklubb", @"Stranda Golfklubb", @"Stryn Golfklubb", @"Sunndal Golfklubb", @"Sunnfjord Golfklubb", @"Sunnmøre Golfklubb", @"Surnadal Golfklubb", @"Tjøme Golfklubb", @"Tromsø Golfklubb", @"Trondheim Golfklubb", @"Trysil Golfklubb", @"Tyrifjord Golfklubb", @"Ullensaker Golfklubb", @"Valdres Golfklubb", @"Vanylven Golfklubb", @"Varanger Golfklubb", @"Vesterålen Golfklubb", @"Vestfold Golfklubb", @"Vildmarken Golfklubb", @"Volda Golfklubb", @"Voss Golfklubb", @"Vrådal Golfklubb", @"Østmarka Golfklubb", @"Øya Golfpark", @"Ålesund Golfklubb", nil]; self.banenavn = temp; [temp release]; self.title = @"Golfbaner i Norge"; self.navigationController.navigationBar.barStyle = UIBarStyleBlackTranslucent; } (void)viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; } (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; } (void)viewWillDisappear:(BOOL)animated { [super viewWillDisappear:animated]; } (void)viewDidDisappear:(BOOL)animated { [super viewDidDisappear:animated]; } /* // Override to allow orientations other than the default portrait orientation. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations. return (interfaceOrientation == UIInterfaceOrientationPortrait); } */ // Customize the number of sections in the table view. - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; } (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return [banenavn count]; } // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } cell.textLabel.text = [banenavn objectAtIndex:indexPath.row]; return cell; } /* // Override to support conditional editing of the table view. - (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the specified item to be editable. return YES; } */ /* // Override to support editing the table view. - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { if (editingStyle == UITableViewCellEditingStyleDelete) { // Delete the row from the data source. [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; } else if (editingStyle == UITableViewCellEditingStyleInsert) { // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view. } } */ /* // Override to support rearranging the table view. - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath { } */ /* // Override to support conditional rearranging of the table view. - (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the item to be re-orderable. 
return YES; } */ (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { golf = [banenavn objectAtIndex:indexPath.row]; golfbaner *detailViewController = [[golfbaner alloc] initWithNibName:@"Golfbaner" bundle:nil]; [self.navigationController pushViewController:detailViewController animated:YES]; [detailViewController release]; } (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Relinquish ownership any cached data, images, etc that aren't in use. } (void)viewDidUnload { [super viewDidUnload]; // Relinquish ownership of anything that can be recreated in viewDidLoad or on demand. // For example: self.myOutlet = nil; } (void)dealloc { [super dealloc]; } @end

    Read the article

  • ASP.Net repeater item.DataItem is null

    - by mattgcon
    Within a webpage, upon loading, I fill a dataset with two table with a relation between those tables and then load the data into a repeater with a nested repeater. This can also occur after the user clicks on a button. The data gets loaded from a SQL database and the repeater datasource is set to the dataset after a postback. However, when ItemDataBound occurs the Item.Dataitem is always null. Why would this occur? below is my HTML repeater code <asp:Repeater ID="rptCustomSpaList" runat="server" onitemdatabound="rptCustomSpaList_ItemDataBound"> <HeaderTemplate> </HeaderTemplate> <ItemTemplate> <table> <tr> <td> <asp:Label ID="Label3" runat="server" Text="Spa Series:"></asp:Label> </td> <td> <asp:Label ID="Label4" runat="server" Text='<%#DataBinder.Eval(Container.DataItem, "SPASERIESVALUE") %>'></asp:Label> </td> </tr> <tr> <td> <asp:Label ID="Label5" runat="server" Text="Spa Model:"></asp:Label> </td> <td> <asp:Label ID="Label6" runat="server" Text='<%#DataBinder.Eval(Container.DataItem, "SPAMODELVALUE") %>'></asp:Label> </td> </tr> <tr> <td> <asp:Label ID="Label9" runat="server" Text="Acrylic Color:"></asp:Label> </td> <td> <asp:Label ID="Label10" runat="server" Text='<%#DataBinder.Eval(Container.DataItem, "ACRYLICCOLORVALUE") %>'></asp:Label> </td> </tr> <tr> <td> <asp:Label ID="Label11" runat="server" Text="Cabinet Color:"></asp:Label> </td> <td> <asp:Label ID="Label12" runat="server" Text='<%#DataBinder.Eval(Container.DataItem, "CABPANCOLORVALUE") %>'></asp:Label> </td> </tr> <tr> <td> <asp:Label ID="Label17" runat="server" Text="Cabinet Type:"></asp:Label> </td> <td> <asp:Label ID="Label18" runat="server" Text='<%#DataBinder.Eval(Container.DataItem, "CABINETVALUE") %>'></asp:Label> </td> </tr> <tr> <td> <asp:Label ID="Label13" runat="server" Text="Cover Color:"></asp:Label> </td> <td> <asp:Label ID="Label14" runat="server" Text='<%#DataBinder.Eval(Container.DataItem, "COVERCOLORVALUE") %>'></asp:Label> </td> </tr> </table> <asp:Label ID="Label15" runat="server" Text="Options:"></asp:Label> <asp:Repeater ID="rptCustomSpaItem" runat="server"> <HeaderTemplate> <table> </HeaderTemplate> <ItemTemplate> <tr> <td> <asp:Label ID="Label1" runat="server" Text='<%#DataBinder.Eval(Container.DataItem, "PROPERTY") %>'></asp:Label> </td> <td> <asp:Label ID="Label2" runat="server" Text='<%#DataBinder.Eval(Container.DataItem, "VALUE") %>'></asp:Label> </td> </tr> </ItemTemplate> <FooterTemplate> </table> </FooterTemplate> </asp:Repeater> <table> <tr> <td style="padding-top:15px;padding-bottom:30px;"> <asp:Label ID="Label7" runat="server" Text="Configured Price:"></asp:Label> </td> <td style="padding-top:15px;padding-bottom:30px;"> <asp:Label ID="Label8" runat="server" Text='<%#DataBinder.Eval(Container.DataItem, "SPAVALUEVALUE") %>'></asp:Label> </td> </tr> </table> <asp:Label ID="Label16" runat="server" Text="------"></asp:Label> </ItemTemplate> <FooterTemplate></FooterTemplate> </asp:Repeater>
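    A frequent cause of a null DataItem in ItemDataBound is that the event also fires for the header, footer and separator rows, where DataItem is always null. A minimal sketch of the handler named in the markup above; the DataRowView cast and the relation name "SpaOptions" are assumptions about how the DataSet was built:

        // Requires System.Data and System.Web.UI.WebControls.
        protected void rptCustomSpaList_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            // Header, Footer and Separator rows carry no data
            if (e.Item.ItemType != ListItemType.Item &&
                e.Item.ItemType != ListItemType.AlternatingItem)
            {
                return;
            }

            // For a DataSet-bound Repeater each item is a DataRowView
            DataRowView row = (DataRowView)e.Item.DataItem;

            // Bind the nested repeater through the parent/child relation
            // ("SpaOptions" is a hypothetical relation name)
            Repeater nested = (Repeater)e.Item.FindControl("rptCustomSpaItem");
            nested.DataSource = row.CreateChildView("SpaOptions");
            nested.DataBind();
        }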

    Read the article

  • Building an ASP.Net 4.5 Web forms application - part 4

    - by nikolaosk
    ?his is the fourth post in a series of posts on how to design and implement an ASP.Net 4.5 Web Forms store that sells posters on line.There are 3 more posts in this series of posts.Please make sure you read them first.You can find the first post here. You can find the second post here. You can find the third post here.  In this new post we will build on the previous posts and we will demonstrate how to display the posters per category.We will add a ListView control on the PosterList.aspx and will bind data from the database. We will use the various templates.Then we will write code in the PosterList.aspx.cs to fetch data from the database.1) Launch Visual Studio and open your solution where your project lives2) Open the PosterList.aspx page. We will add some markup in this page. Have a look at the code below  <section class="posters-featured">                    <ul>                         <asp:ListView ID="posterList" runat="server"                            DataKeyNames="PosterID"                            GroupItemCount="3" ItemType="PostersOnLine.DAL.Poster" SelectMethod="GetPosters">                            <EmptyDataTemplate>                                      <table id="Table1" runat="server">                                            <tr>                                                  <td>We have no data.</td>                                            </tr>                                     </table>                              </EmptyDataTemplate>                              <EmptyItemTemplate>                                     <td id="Td1" runat="server" />                              </EmptyItemTemplate>                              <GroupTemplate>                                    <tr ID="itemPlaceholderContainer" runat="server">                                          <td ID="itemPlaceholder" runat="server"></td>                                    </tr>                              </GroupTemplate>                              <ItemTemplate>                                    <td id="Td2" runat="server">                                          <table>                                                <tr>                                                      <td>&nbsp;</td>                                                      <td>                                                <a href="PosterDetails.aspx?posterID=<%#:Item.PosterID%>">                                                    <img src="<%#:Item.PosterImgpath%>"                                                        width="100" height="75" border="1"/></a>                                             </td>                                            <td>                                                <a href="PosterDetails.aspx?posterID=<%#:Item.PosterID%>">                                                    <span class="PosterName">                                                        <%#:Item.PosterName%>                                                    </span>                                                </a>                                                            <br />                                                <span class="PosterPrice">                                                               <b>Price: </b><%#:String.Format("{0:c}", Item.PosterPrice)%>                                                </span>                                                <br />                                                        </td>                                                </tr>       
                                   </table>                                    </td>                              </ItemTemplate>                              <LayoutTemplate>                                    <table id="Table2" runat="server">                                          <tr id="Tr1" runat="server">                                                <td id="Td3" runat="server">                                                      <table ID="groupPlaceholderContainer" runat="server">                                                            <tr ID="groupPlaceholder" runat="server"></tr>                                                      </table>                                                </td>                                          </tr>                                          <tr id="Tr2" runat="server"><td id="Td4" runat="server"></td></tr>                                    </table>                              </LayoutTemplate>                        </asp:ListView>                    </ul>               </section>  3) We have a ListView control on the page called PosterList. I set the ItemType property to the Poster class and then the SelectMethod to the GetPosters method.  I will create this method later on.   (ItemType="PostersOnLine.DAL.Poster" SelectMethod="GetPosters")Then in the code below  I have the data-binding expression Item  available and the control becomes strongly typed.So when the user clicks on the link of the poster's category the relevant information will be displayed (photo,name and price)                                            <td>                                                <a href="PosterDetails.aspx?posterID=<%#:Item.PosterID%>">                                                    <img src="<%#:Item.PosterImgpath%>"                                                        width="100" height="75" border="1"/></a>                                             </td>4)  Now we need to write the simple method to populate the ListView control.It is called GetPosters method.The code follows   public IQueryable<Poster> GetPosters([QueryString("id")] int? PosterCatID)        {            PosterContext ctx = new PosterContext();            IQueryable<Poster> query = ctx.Posters;            if (PosterCatID.HasValue && PosterCatID > 0)            {                query = query.Where(p=>p.PosterCategoryID==PosterCatID);            }            return query;                    } This is a very simple method that returns information about posters related to the PosterCatID passed to it.I bind the value from the query string to the PosterCatID parameter at run time.This is all possible due to the QueryStringAttribute class that lives inside the System.Web.ModelBinding and gets the value of the query string variable id.5) I run my application and then click on the "Midfilders" link. Have a look at the picture below to see the results.  In the Site.css file I added some new CSS rules to make everything more presentable. .posters-featured {    width:840px;    background-color:#efefef;}.posters-featured   a:link, a:visited,    a:active, a:hover {        color: #000033;    }.posters-featured    a:hover {        background-color: #85c465;    }  6) I run the application again and this time I do not choose any category, I simply navigate to the PosterList.aspx page. 
    I see all the posters since no query string was passed as a parameter. Have a look at the picture below. Make sure you place breakpoints in the code so you can see what is really going on. In the next post I will show you how to display poster details. Hope it helps!!!

    Read the article

  • Application crashing when talking to oracle unless executable path contains spaces

    - by Lasse V. Karlsen
    We have an x-files problem with our .NET application. Or, rather, hybrid Win32 and .NET application. When it attempts to communicate with Oracle, it just dies. Vanishes. Goes to the big black void in the sky. No event log message, no exception, no nothing. If we simply ask the application to talk to a MS SQL Server instead, which has the effect of replacing the usage of OracleConnection and related classes with SqlConnection and related classes, it works as expected. Today we had a breakthrough. For some reason, a customer had figured out that by placing all the application files in a directory on his desktop, it worked as expected with Oracle as well. Moving the directory down to the root of the drive, or in C:\Temp or, well, around a bit, made the crash reappear. Basically it was 100% reproducable that the application worked if run from directory on desktop, and failed if run from directory in root. Today we figured out that the difference that counted was wether there was a space in the directory name or not. So, these directories would work: C:\Program Files\AppDir\Executable.exe C:\Temp Lemp\AppDir\Executable.exe C:\Documents and Settings\someuser\Desktop\AppDir\Executable.exe whereas these would not: C:\CompanyName\AppDir\Executable.exe C:\Programfiler\AppDir\Executable.exe <-- Program Files in norwegian C:\Temp\AppDir\Executable.exe I'm hoping someone reading this has seen similar behavior and have a "aha, you need to twiddle the frob on the oracle glitz driver configuration" or similar. Anyone? Followup #1: Ok, I've processed the procmon output now, both files from when I hit the button that attempts to open the window that triggers the cascade failure, and I've noticed that they keep track mostly, there's some smallish differences near the top of both files, and they they keep track a long way down. However, when one run fails, the other keeps going and the next few lines of the log output are these: ReadFile C:\oracle\product\10.2.0\db_1\BIN\orageneric10.dll SUCCESS Offset: 274 432, Length: 32 768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O ReadFile C:\oracle\product\10.2.0\db_1\BIN\orageneric10.dll SUCCESS Offset: 233 472, Length: 32 768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O After this, the working run continues to execute, and the other touches the mscorwks.dll files a few times before threads close down and the app closes. Thus, the failed run does not touch the above files. Followup #2: Figured I'd try to upgrade the oracle client drivers, but 10.2.0.1 is apparently the highest version available for Windows 2003 server and XP clients. Followup #3: Well, we've ended up with a black-box solution. Basically we found that the problem is somewhere related to XPO and Oracle. XPO has a system-table it manages, called XPObjectType, with three columns: Oid, TypeName and AssemblyName. Due to how Oracle is configured in the databases we talk to, the column names were OID, TYPENAME and ASSEMBLYNAME. This would ordinarily not be a problem, except that XPO talks to the schema information directly and checks if the table is there with the right column names, and XPO doesn't handle case differences so it sees a XPObjectType table with three unknown columns and none of those it expects. Exactly what XPO does now I don't really know, but if I dropped this table, and recreated it with the right case, using double quotes around all the column names to get the case right, the problem doesn't crop up. 
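    A sketch of the table fix described above, with the column names taken from the post; the data types and the primary key are assumptions, since the post only says the columns were recreated with quoted (case-sensitive) names:

        -- Recreate XPO's system table so the column names keep their case.
        -- Types are guesses; only the names Oid/TypeName/AssemblyName come from the post.
        DROP TABLE XPOBJECTTYPE;

        CREATE TABLE XPObjectType (
            "Oid"          NUMBER(10)     NOT NULL,
            "TypeName"     VARCHAR2(254),
            "AssemblyName" VARCHAR2(254),
            CONSTRAINT pk_xpobjecttype PRIMARY KEY ("Oid")
        );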
Exactly where the space in the folder name comes into this, I still have no idea, but this problem had two tiers: Stop the application from crashing at our customers, short-term solution Fix the bug, long-term solution Right now tier 1 is solved, tier 2 will be put back into the queue for now and prioritized. We're facing some bigger changes to our data tier anyway so this might not be a problem we need to solve, at least if all our Oracle-customers verify that the table-fix actually gets rid of the problem. I'll accept the answer by Dave Markle since though Process Monitor (the big brother of File Monitor) didn't actually pinpoint the problem, I was able to use it to determine that after my breakpoint in user-code where XPO had built up the query for this table, no I/O happened until all the entries for the application closing down was logged, which led me to believe it was this table that was the culprit, or at least influenced the problem somehow. If I manage to get to the real cause of this, I'll update the post.

    Read the article

  • pagination in php error

    - by fusion
    i've implemented this pagination class for my webpage in a separate file called class.pagination.php, but when i execute the page, nothing happens. it just displays a blank page. this is my search.php file, where i'm calling this class: <?php include 'config.php'; require ('class.pagination.php'); $search_result = ""; $search_result = $_GET["q"]; $search_result = trim($search_result); //Check if the string is empty if ($search_result == "") { echo "<p class='error'>Search Error. Please Enter Your Search Query.</p>" ; exit(); } //search query for multiple keywords if(!empty($search_result)) { // the table to search $table = "thquotes"; // explode search words into an array $arraySearch = explode(" ", $search_result); // table fields to search $arrayFields = array(0 => "cQuotes"); $countSearch = count($arraySearch); $a = 0; $b = 0; $query = "SELECT cQuotes, vAuthor, cArabic, vReference FROM ".$table." WHERE ("; $countFields = count($arrayFields); while ($a < $countFields) { while ($b < $countSearch) { $query = $query."$arrayFields[$a] LIKE '%$arraySearch[$b]%'"; $b++; if ($b < $countSearch) { $query = $query." AND "; } } $b = 0; $a++; if ($a < $countFields) { $query = $query.") OR ("; } } $query = $query.")"; $result = mysql_query($query, $conn) or die ('Error: '.mysql_error()); $totalrows = mysql_num_rows($result); if($totalrows < 1) { echo '<span class="error2">No matches found for "'.$search_result.'"</span>'; } else { ?> <div class="caption">Search Results</div> <div class="center_div"> <table> <?php while ($row= mysql_fetch_array($result, MYSQL_ASSOC)) { $cQuote = highlightWords(htmlspecialchars($row['cQuotes']), $search_result); ?> <tr> <td style="text-align:right; font-size:15px;"><?php h($row['cArabic']); ?></td> <td style="font-size:16px;"><?php echo $cQuote; ?></td> <td style="font-size:12px;"><?php h($row['vAuthor']); ?></td> <td style="font-size:12px; font-style:italic; text-align:right;"><?php h($row['vReference']); ?></td> </tr> <?php } ?> </table> </div> <?php } } else { exit(); } //Setting Pagination $pagination = new pagination(); $pagination->byPage = 5; $pagination->rows = $totalrows; // number of records in a table-back mysql_num_rows () instance or another, you have to play $from = $pagination->fromPagination(); // sql used for applications such LIMIT $ from, $ pagination-> byPage $pages = $pagination->pages(); if(isset($pages)) {?> <div class="pagination"> <?foreach ($pages as $key){?> <?if($key['current'] == 1) {?> <a href="?p=<?=$key['p']?>" class="active"><?=$key['page']?></a> <?} else {?> <a href="?p=<?=$key['p']?>" class="inactive"><?=$key['page']?></a> <?}?> <?}?> </div> <?} //End Pagination ?>
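    When a PHP page renders completely blank, a common first step is to surface the fatal error that is being swallowed; a small debugging sketch (not part of the pagination class, and assuming nothing about it):

        <?php
        // Put this at the very top of search.php while debugging so parse/fatal
        // errors (e.g. inside class.pagination.php) become visible.
        error_reporting(E_ALL);
        ini_set('display_errors', '1');

        // Also note that the pagination markup above relies on short open tags
        // ("<?" and "<?="); if short_open_tag is disabled in php.ini those blocks
        // are sent to the browser as literal text instead of being executed.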

    Read the article

  • How to bring out checkboxes based on drop-down list selection from DB

    - by user2199877
    I got stuck again. Can't overcome this step: loop through (in a form of checkboxes) pallets based on the lot drop down list selection, so it can be further submitted to complete the table. Please, please help. So, basically, first submit button (drop down menu) brings into the table lot number and description and also checkboxes to choose pallets. Second submit button (checboxes) brings into the table pallets numbers and weights. Thank you for any help. <?php mysql_connect('localhost','user',''); mysql_select_db('base'); $query="SELECT DISTINCT lot_number FROM pl_table"; $result=mysql_query($query); ?> <form action="" method="POST"> <select name="option_chosen"> <option>-- Select lot --</option> <?php while(list($lot_number)=mysql_fetch_row($result)) { echo "<option value=\"".$lot_number."\">".$lot_number."</option>"; } ?> </select> <input type='submit' name='submitLot' value='Submit' /> </form> <!-- need help here <h4>-- Select pallets --</h4> <form action="" method="POST"> <input type='submit' name='submitPal' value='Submit'/> </form> --> <table border="1" id="table"> <tr> <th width=80 height=30>Lot<br/>number</th> <th width=110 height=30>Description</th> <th width=90 height=30>Pallet<br/>number</th> <th width=60 height=30>Net</th> <th width=60 height=30>Gross</th> </tr> <?php if($_SERVER['REQUEST_METHOD'] =='POST') {$option_chosen=$_POST['option_chosen']; $query="SELECT * FROM pl_table WHERE lot_number='$option_chosen'"; $run=mysql_query($query); $row=mysql_fetch_array($run, MYSQLI_ASSOC); echo "<tr><td>".''."</td>"; echo "<td rowspan='5'>".$row['descr']."</td>"; echo "<td><b>".'Total weight'."<b></td>"; echo "<td>".''."</td><td>".''."</td></tr>"; echo "<td>".$row['lot_number']."</td>"; echo "<td colspan='3'>".''."</td>"; //This to be echoed when "select pallets" submited //echo "<tr><td>".$row['lot_number']."</td>"; //echo "<td>".$row['pallet_number']."</td>"; //echo "<td>".$row['net']."</td><td>".$row['gross']."</td></tr>"; } ?> </table> the table +--------------------------+-------------------------+---------+-------+ | id | lot_number | descr | pallet_number | net | gross | +--------------------------+-------------------------+---------+-------+ | 1 | 111 | black | 1 | 800 | 900 | | 2 | 111 | black | 2 | 801 | 901 | | 3 | 111 | black | 3 | 802 | 902 | | 4 | 222 | white | 1 | 800 | 900 | | 5 | 222 | white | 2 | 801 | 901 | | 6 | 222 | white | 3 | 802 | 902 | +--------------------------+-------------------------+---------+-------+
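    A rough sketch of the missing step: once a lot has been submitted, query that lot's pallets and render them as checkboxes in the second form. The table and column names come from the question; the pallets[] field name and form layout are assumptions:

        <?php
        // Second form: one checkbox per pallet of the chosen lot
        if (isset($_POST['option_chosen'])) {
            $lot = mysql_real_escape_string($_POST['option_chosen']);
            $run = mysql_query("SELECT pallet_number, net, gross
                                FROM pl_table
                                WHERE lot_number = '$lot'");

            echo "<h4>-- Select pallets --</h4>";
            echo "<form action='' method='POST'>";
            // carry the chosen lot over to the next request
            echo "<input type='hidden' name='option_chosen' value='" . htmlspecialchars($lot) . "' />";
            while ($row = mysql_fetch_assoc($run)) {
                echo "<label><input type='checkbox' name='pallets[]' value='" . $row['pallet_number'] . "' /> "
                   . "Pallet " . $row['pallet_number'] . " (net " . $row['net'] . ", gross " . $row['gross'] . ")</label><br/>";
            }
            echo "<input type='submit' name='submitPal' value='Submit' />";
            echo "</form>";
        }
        ?>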

    Read the article

  • Different behavior for REF CURSOR between Oracle 10g and 11g when unique index present?

    - by wweicker
    Description I have an Oracle stored procedure that has been running for 7 or so years both locally on development instances and on multiple client test and production instances running Oracle 8, then 9, then 10, and recently 11. It has worked consistently until the upgrade to Oracle 11g. Basically, the procedure opens a reference cursor, updates a table then completes. In 10g the cursor will contain the expected results but in 11g the cursor will be empty. No DML or DDL changed after the upgrade to 11g. This behavior is consistent on every 10g or 11g instance I've tried (10.2.0.3, 10.2.0.4, 11.1.0.7, 11.2.0.1 - all running on Windows). The specific code is much more complicated but to explain the issue in somewhat realistic overview: I have some data in a header table and a bunch of child tables that will be output to PDF. The header table has a boolean (NUMBER(1) where 0 is false and 1 is true) column indicating whether that data has been processed yet. The view is limited to only show rows in that have not been processed (the view also joins on some other tables, makes some inline queries and function calls, etc). So at the time when the cursor is opened, the view shows one or more rows, then after the cursor is opened an update statement runs to flip the flag in the header table, a commit is issued, then the procedure completes. On 10g, the cursor opens, it contains the row, then the update statement flips the flag and running the procedure a second time would yield no data. On 11g, the cursor never contains the row, it's as if the cursor does not open until after the update statement runs. I'm concerned that something may have changed in 11g (hopefully a setting that can be configured) that might affect other procedures and other applications. What I'd like to know is whether anyone knows why the behavior is different between the two database versions and whether the issue can be resolved without code changes. Update 1: I managed to track the issue down to a unique constraint. It seems that when the unique constraint is present in 11g the issue is reproducible 100% of the time regardless of whether I'm running the real world code against the actual objects or the following simple example. Update 2: I was able to completely eliminate the view from the equation. I have updated the simple example to show the problem exists even when querying directly against the table. 
Simple Example CREATE TABLE tbl1 ( col1 VARCHAR2(10), col2 NUMBER(1) ); INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0); /* View is no longer required to demonstrate the problem CREATE OR REPLACE VIEW vw1 (col1, col2) AS SELECT col1, col2 FROM tbl1 WHERE col2 = 0; */ CREATE OR REPLACE PACKAGE pkg1 AS TYPE refWEB_CURSOR IS REF CURSOR; PROCEDURE proc1 (crs OUT refWEB_CURSOR); END pkg1; CREATE OR REPLACE PACKAGE BODY pkg1 IS PROCEDURE proc1 (crs OUT refWEB_CURSOR) IS BEGIN OPEN crs FOR SELECT col1 FROM tbl1 WHERE col1 = 'TEST1' AND col2 = 0; UPDATE tbl1 SET col2 = 1 WHERE col1 = 'TEST1'; COMMIT; END proc1; END pkg1; Anonymous Block Demo DECLARE crs1 pkg1.refWEB_CURSOR; TYPE rectype1 IS RECORD ( col1 vw1.col1%TYPE ); rec1 rectype1; BEGIN pkg1.proc1 ( crs1 ); DBMS_OUTPUT.PUT_LINE('begin first test'); LOOP FETCH crs1 INTO rec1; EXIT WHEN crs1%NOTFOUND; DBMS_OUTPUT.PUT_LINE(rec1.col1); END LOOP; DBMS_OUTPUT.PUT_LINE('end first test'); END; /* After creating this index, the problem is seen */ CREATE UNIQUE INDEX unique_col1 ON tbl1 (col1); /* Reset data to initial values */ TRUNCATE TABLE tbl1; INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0); DECLARE crs1 pkg1.refWEB_CURSOR; TYPE rectype1 IS RECORD ( col1 vw1.col1%TYPE ); rec1 rectype1; BEGIN pkg1.proc1 ( crs1 ); DBMS_OUTPUT.PUT_LINE('begin second test'); LOOP FETCH crs1 INTO rec1; EXIT WHEN crs1%NOTFOUND; DBMS_OUTPUT.PUT_LINE(rec1.col1); END LOOP; DBMS_OUTPUT.PUT_LINE('end second test'); END; Example of what the output on 10g would be:   begin first test   TEST1   end first test   begin second test   TEST1   end second test Example of what the output on 11g would be:   begin first test   TEST1   end first test   begin second test   end second test Clarification I can't remove the COMMIT because in the real world scenario the procedure is called from a web application. When the data provider on the front end calls the procedure it will issue an implicit COMMIT when disconnecting from the database anyways. So if I remove the COMMIT in the procedure then yes, the anonymous block demo would work but the real world scenario would not because the COMMIT would still happen. Question Why is 11g behaving differently? Is there anything I can do other than re-write the code?

    Read the article

  • calling a java class in a servlet

    - by kawtousse
    hi, in my servlet i called an instance of a class.java( a class that construct an html table) in order to create this table in my jsp. the servlet is like the following: String report=request.getParameter("selrep"); String datev=request.getParameter("datepicker"); String op=request.getParameter("operator"); String batch =request.getParameter("selbatch"); System.out.println("report kind was:"+report); System.out.println("date was:"+datev); System.out.println("operator:"+op); System.out.println("batch:"+batch); if(report.equalsIgnoreCase("Report Denied")) { DeniedReportDisplay rd = new DeniedReportDisplay(); rd.ConstruireReport(); } else if(report.equalsIgnoreCase("Report Locked")) { LockedReportDisplay rl = new LockedReportDisplay(); rl.ConstruireReport(); } request.getRequestDispatcher("EspaceValidation.jsp").forward(request, response); in my jsp i can not display this table even empty or full. note: exemple a class that construct denied Report has this structure: /*constructeur*/ public DeniedReportDisplay() {} /*Methodes*/ @SuppressWarnings("unchecked") public StringBuffer ConstruireReport() { StringBuffer retour=new StringBuffer(); int i = 0; retour.append("<table border = 1 width=900 id=sheet align=left>"); retour.append("<tr bgcolor=#0099FF>" ); retour.append("<label> Denied Report</label>"); retour.append("</tr>"); retour.append("<tr>"); String[] nomCols ={"Nom","Prenom","trackingDate","activity","projectcode","WAName","taskCode","timeSpent","PercentTaskComplete","Comment"}; //String HQL_QUERY = null; for(i=0;i< nomCols.length;i++) { retour.append(("<td bgcolor=#0066CC>")+ nomCols[i] + "</td>"); } retour.append("</tr>"); retour.append("<tr>"); try { s= HibernateUtil.currentSession(); tx=s.beginTransaction(); Query query = s.createQuery("select opcemployees.Nom,opcemployees.Prenom,dailytimesheet.TrackingDate,dailytimesheet.Activity," + "dailytimesheet.ProjectCode,dailytimesheet.WAName,dailytimesheet.TaskCode," + "dailytimesheet.TimeSpent,dailytimesheet.PercentTaskComplete from Opcemployees opcemployees,Dailytimesheet dailytimesheet " + "where opcemployees.Matricule=dailytimesheet.Matricule and dailytimesheet.Etat=3 " + "group by opcemployees.Nom,opcemployees.Prenom" ); for(Iterator it=query.iterate();it.hasNext();) { if(it.hasNext()){ Object[] row = (Object[]) it.next(); retour.append("<td>" +row [0]+ "</td>");//Nom retour.append("<td>" + row [1] + "</td>");//Prenom retour.append("<td>" + row [2] + "</td>");//trackingdate retour.append("<td>" + row [3]+ "</td>");//activity retour.append("<td>" + row [4] +"</td>");//projectcode retour.append("<td>" + row [5]+ "</td>");//waname retour.append("<td>" + row [6] + "</td>");//taskcode retour.append("<td>" + row [7] + "</td>");//timespent retour.append("<td>" + row [8] + "</td>");//perecnttaskcomplete retour.append("<td><input type=text /></td>");//case de commentaire } retour.append("</tr>"); } //terminer la table. retour.append ("</table>"); tx.commit(); } catch (HibernateException e) { retour.append ("</table><H1>ERREUR:</H1>" +e.getMessage()); e.printStackTrace(); } return retour; } thanks for help.

    Read the article

  • tables wrapping to next line when width 100%

    - by jmo
    I'm encountering some weirdness with tables in css. The layout is fairly simple, a fixed-width nav bar on the left and the content on the right. When the content includes a table with a width of 100% the table ends up getting pushed down until it has room to take up the full width of the screen (instead of just the area to the right of the nav bar). If I remove the width=100% from the table's css, then it looks fine, but obviously the table doesn't grow to fill the space of the div. The problem is that i want the table to grow and shrink with the window but still stay in the bounds of its div. Thanks. Here's a simple example: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Test</title> <style type="text/css"> #content { padding-right:20px; background:white; overflow:hidden; margin:20px; } #content .column { position:relative; padding-bottom: 20010px; margin-bottom: -20000px; } #center { width:100%; padding-top:15px; } body { min-width:700px; } #left { width: 330px; padding: 0 10px; padding-top:10px; float:left; } .tableData { width:100%; } </style> </head> <body> <div id="content"> <div class="column" id="left"> <div> Some text goes in here<br/> some more text<br/> some more text<br/> some more text<br/> some more text<br/> some more text<br/> </div> </div> <div class="column" id="center"> Some text at the top; <hr/> <table class="tableData"> <thead> <tr><th>A</th><th>B</th><th>C</th></tr> </thead> <tbody> <tr> <td>A1 A1 A1 A1</td> <td>B1 B1 B1 B1</td> <td>C1 C1 C1 C1 C</td> </tr> <tr> <td>A2 A2 A2 A2 </td> <td>B2 B2 B2 B2 </td> <td>C2 C2 C2 C2</td> </tr> <tr> <td>A3 A3 A3 A3 A3 </td> <td>B3 B3 B3 B3 B3 </td> <td>C3 C3 C3 C3 C3</td> </tr> <tr> <td>A4 A4 A4 A4 A4</td> <td>B4 B4 B4 B4 B4</td> <td>C4 C4 C4 C4 C4</td> </tr> </tbody> </table> </div> </div> </body> </html>

    Read the article

  • How do I mount a "DiskSecure Multiboot" partition?

    - by ????
    For a hard drive that has 4 or 5 partitions, I was able to mount one of them using Ubuntu LiveCD: sudo mount /dev/sda1 /mnt but is there a way to mount to the other partitions? (if using sudo fdisk -l, it only shows /dev/sda) GParted's snapshot is: Right now, the fdisk info is as follows: ubuntu@ubuntu:~$ sudo fdisk -l /dev/sda Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x1aca8ea5 Device Boot Start End Blocks Id System /dev/sda1 284993226 350602558 32804666+ 7 HPFS/NTFS/exFAT and then ubuntu@ubuntu:/mnt$ sudo fdisk -l /dev/sda1 Disk /dev/sda1: 33.6 GB, 33591978496 bytes 255 heads, 63 sectors/track, 4083 cylinders, total 65609333 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x2052474d This doesn't look like a partition table Probably you selected the wrong device. Device Boot Start End Blocks Id System /dev/sda1p1 ? 6579571 1924427647 958924038+ 70 DiskSecure Multi-Boot /dev/sda1p2 ? 1953251627 3771827541 909287957+ 43 Unknown /dev/sda1p3 ? 225735265 225735274 5 72 Unknown /dev/sda1p4 2642411520 2642463409 25945 0 Empty Partition table entries are not in disk order Per @lgarzo's request, parted info is: ubuntu@ubuntu:/mnt$ sudo parted /dev/sda print Model: ATA ST3320820AS (scsi) Disk /dev/sda: 320GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 146GB 180GB 33.6GB primary ntfs boot The command sudo mount /dev/sda1p2 /mnt won't work.

    Read the article

  • Como Exportar Crystal Reports a Excel, Word, Rich Text, PDF ó HTML

    - by jaullo
    Cuando trabajamos con reportes siempre requerimos la funcionalidad de exportación. En crystal reports para asp.net, realizar esta tarea es sumamente sencillo. Sin embargo la pregunta más grande que salta siempre, es como realizarlo utilizando código Behind. Para poder acceder a las librerias de crystal y sus componentes, primero debemos importar los espacios de nombres: Normal 0 21 false false false ES X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Tabla normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} Imports CrystalDecisions.CrystalReports.Engine Imports CrystalDecisions.Shared  CrystalDecisions.CrystalReports.Engine, nos servirá para poder manejar nuestro reportDocument y CrystalDecisions.Shared, será el medio que utilicemos para la exportación. Así que, veamos como podemos exportar nuestro informe sin tener que enviarlo a la impresora, recordemos que por defecto crystal reports ya tiene la opcion de exportar a PDF sin embargo debemos hacerlo tal como si fueramos a imprimir y que es lo que evitaremos acá. Colocamos un botón en nuestra pagina asp Normal 0 21 false false false ES X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Tabla normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} <asp:Button ID="btntopdf" runat="server" Text="Exportar a PDF" /> Y en nuestro boton deberemos ejecutar la siguiente rutina: Normal 0 21 false false false ES X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Tabla normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} Protected Sub btntodpf_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles 
btntopdf.Click          'Cargar reporte. Enlazando a la fuente de datos.        LoadReporte()          'Mas adelante veremos que estas lineas las podemos obviar        Response.Buffer = False        Response.Clear()  'ClearContent, ClearHeaders          reporteDoc.ExportToHttpResponse(ExportFormatType.PortableDocFormat, Response, True, "NombreArchivo")       End Sub LoadReport, es el encargado de llenar nuestro crystal con la fuente de datos. Está fue la primer forma de exporta nuestro crystal reports, pero no es la única, así que vamos a ver otra forma en la cual utilizaremos el metodo v\:* {behavior:url(#default#VML);} o\:* {behavior:url(#default#VML);} w\:* {behavior:url(#default#VML);} .shape {behavior:url(#default#VML);} Normal 0 false 21 false false false ES X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Tabla normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} ExportToHttpResponse  Para este metodo, nuestro código en el botón cambia relativamente, pero antes de ello, daremos un repaso a los metodos utilizados. Nuestro primer parametro FormatType es un valor de tipo ExportFormatType, que puede corresponder a cualquiera de los metodos que enumeramos a continuación: CrystalReport: El formato al cual se exporta es de Tipo CrystalReport. Excel: El formato al cual se exporta es de tipo Excel ExcelRecord: El formato al cual se exporta es de Tipo Excel Record. NoFormat: No se ha especificado un formato de exportación. PortableDocFormat: El formato al cual se exporta es de Tipo PDF.  No voy a enumerar todos, pues me imagino que ya sabrán la idea de cada uno de los formatos, los numerados arriba son los mas importantes. Nuestro segundo parametro el objeto response nos permite adozar el archivo. Y por último, nuestro tercer parametro, definirá si debe ir como un objeto adjunto o no. Si lo colocamos en TRUE, estaremos enviando nuestro archivo como parametro, esto hará que no necesitemos las siguientes líneas de código: Normal 0 21 false false false ES X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Tabla normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} Response.Buffer = False Response.Clear()   Con esto realizado, ya contamos con la posibilidad de enviar el archivo directamente al cliente.   
Ahora si, veamos cuanto se ha reducido nuestro código: Unicamente nos quedan dos líneas de código en nuestro botón Normal 0 21 false false false ES X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Tabla normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;}        'Cargar reporte. Enlazando a la fuente de datos.        LoadReport()          reporteDoc.ExportToHttpResponse(ExportFormatType.PortableDocFormat, Response, True, "NombreArchivo")   Para finalizar, nada mas decir que espero esto les sea de ayuda y por supuesto,  que les facilite la vida con el uso de crystal reports.

    Read the article

  • SQL SERVER – Server Side Paging in SQL Server 2011 – Part2

    - by pinaldave
    The best part of the having blog is that SQL Community helps to keep it running with new ideas. Earlier I wrote about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative. A very popular article on that subject. I had used variables for “number of the rows” and “number of the pages”. Blog reader send me email asking in their organizations these values are stored in the table. Is there any the new syntax can read the data from the table. Absolutely YES! USE AdventureWorks2008R2 GO CREATE TABLE PagingSetting (RowsPerPage INT, PageNumber INT) INSERT INTO PagingSetting (RowsPerPage, PageNumber) VALUES(10,5) GO SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET (SELECT RowsPerPage*PageNumber FROM PagingSetting) ROWS FETCH NEXT (SELECT RowsPerPage FROM PagingSetting) ROWS ONLY GO Here is the quick script: This is really an easy trick. I also wrote blog post on comparison of the performance over here: . SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: SQL Paging

    Read the article

  • swapon --all --verbose : 'read swap header failed: Invalid argument'

    - by user66088
    Recently ran through EnableHibernateWithEncryptedSwap and ran the following command: swapon --all --verbose and received: 'read swap header failed: Invalid argument' How do I fix this? Here's some more pertinent output... Output of sudo fdisk -l: Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00006d20 Device Boot Start End Blocks Id System /dev/sda1 * 2048 499711 248832 83 Linux /dev/sda2 501758 156301311 77899777 5 Extended /dev/sda5 501760 156301311 77899776 8e Linux LVM Disk /dev/mapper/ubuntu--t10194-root: 75.5 GB, 75539415040 bytes 255 heads, 63 sectors/track, 9183 cylinders, total 147537920 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/ubuntu--t10194-root doesn't contain a valid partition table Disk /dev/mapper/ubuntu--t10194-swap_1: 4227 MB, 4227858432 bytes 255 heads, 63 sectors/track, 514 cylinders, total 8257536 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x08040000 Disk /dev/mapper/ubuntu--t10194-swap_1 doesn't contain a valid partition table Disk /dev/mapper/cryptswap1: 4225 MB, 4225761280 bytes 255 heads, 63 sectors/track, 513 cylinders, total 8253440 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xd2236983 Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table Thanks for any and ALL help!

    Read the article

  • SQL SERVER – Index Created on View not Used Often – Limitation of the View 12

    - by pinaldave
    I have previously written on the subject SQL SERVER – The Limitations of the Views – Eleven and more…. This was indeed a very popular series and I had received lots of feedback on that topic. Today we are going to discuss something very interesting as well. During my recent performance tuning seminar in Hyderabad, I presented on the subject of Views. During the seminar, one of the attendees asked a question: We create a table and create a View on the top of it. On the same view, if we create Index, when querying View, will that index be used? The answer is NOT Always! (There is only one specific condition when it will be used. We will write about that later in the next post). Let us see the test case for the same. In our script we will do following: USE tempdb GO IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]')) DROP VIEW [dbo].[SampleView] GO IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U')) DROP TABLE [dbo].[mySampleTable] GO -- Create SampleTable CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100)) INSERT INTO mySampleTable (ID1,ID2,SomeData) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY o1.name), ROW_NUMBER() OVER (ORDER BY o2.name), o2.name FROM sys.all_objects o1 CROSS JOIN sys.all_objects o2 GO -- Create View CREATE VIEW SampleView WITH SCHEMABINDING AS SELECT ID1,ID2,SomeData FROM dbo.mySampleTable GO -- Create Index on View CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView] ( ID2 ASC ) GO -- Select from view SELECT ID1,ID2,SomeData FROM SampleView GO Let us check the execution plan for the last SELECT statement. You can see from the execution plan. That even though we are querying View and the View has index, it is not really using that index. In the next post, we will see the significance of this View and where it can be helpful. Meanwhile, I encourage you to read my View series: SQL SERVER – The Limitations of the Views – Eleven and more…. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Training, SQL View, T SQL, Technology

    Read the article

  • Using Oracle Data in the Business Rules Engine

    - by Christopher House
    Yesterday I started working on some new functionality that I had planned to implement using the Business Rules Engine.  As I got further into it, I realized that some of my rules were going to need to reference some data that resides in an Oracle database.  I knew the Business Rules Composer supports using DataConnections and TypedDataTables, but I’d never used this functionality myself, so I wasn’t so sure how it would work with Oracle.  As it turns out, it’s very do-able, there’s just little hoop you need to jump through. I fired up BRC and my suspicions were quickly confirmed.  BRC only recognizes SQL Server databases when it comes to editing rules.  Not letting that deter me, I decided to see if I could “trick” BRE into using Oracle data. On my local SQL server, I created a new database and in that database, created a table that matched the schema of the table I wanted to use in the Oracle database.  I then set about creating my rules, referencing the new SQL Server database everywhere I wanted to use Oracle data.  Finally, I created a new class library and added a class that implements Microsoft.RuleEngine.IFactRetriever.  In that class, I added the necessary code to get a DataSet from the Oracle server, wrap it in a TypedDataTable and assert it into the rule engine.  It’s worth pointing out that in my IFactRetriever class, I made sure to set my DataSet name to the name of the database I’d referenced in the BRC and the DataTable’s name to the name of the table that I’d referenced in the BRC. After gac’ing the new class library and deploying my policy, I tested and everything worked as expected.

    Read the article

  • How to design database for tests in online test application

    - by Kien Thanh
    I'm building an online test application, the purpose of app is, it can allow teacher create courses, topics of course, and questions (every question has mark), and they can create tests for students and students can do tests online. To create tests of any courses for students, first teacher need to create a test pattern for that course, test pattern actually is a general test includes the number of questions teacher want it has, then from that test pattern, teacher will generate number of tests corresponding with number of students will take tests of that course, and every test for student will has different number of questions, although the max mark of test in every test are the same. Example if teacher generate tests for two students, the max mark of test will be 20, like this: Student A take test with 20 questions, student B take test only has 10 questions, it means maybe every question in test of student A only has mark is 1, but questions in student B has mark is 2. So 20 = 10 x 2, sorry for my bad English but I don't know how to explain it better. I have designed tables for: - User (include students and teachers account) - Course - Topic - Question - Answer But I don't know how to define associations between user and test pattern, test, question. Currently I only can think these: Test pattern table: name, description, dateStart, dateFinish, numberOfMinutes, maxMarkOfTest Test table: test_pattern_id And when user (is Student) take tests, I think i will have one more table: Result: user_id, test_id, mark but I can't set up associations among test pattern and test and question. How to define associations?

    Read the article

  • Data Web Controls Enhancements in ASP.NET 4.0

    Traditionally, developers using Web controls enjoyed increased productivity but at the cost of control over the rendered markup. For instance, many ASP.NET controls automatically wrap their content in <table> for layout or styling purposes. This behavior runs counter to the web standards that have evolved over the past several years, which favor cleaner, terser HTML; sparing use of tables; and Cascading Style Sheets (CSS) for layout and styling. Furthermore, the <table> elements and other automatically-added content makes it harder to both style the Web controls using CSS and to work with the controls from client-side script. One of the aims of ASP.NET version 4.0 is to give Web Form developers greater control over the markup rendered by Web controls. Last week's article, Take Control Of Web Control ClientID Values in ASP.NET 4.0, highlighted how new properties in ASP.NET 4.0 give the developer more say over how a Web control's ID property is translated into a client-side id attribute. In addition to these ClientID-related properties, many Web controls in ASP.NET 4.0 include properties that allow the page developer to instruct the control to not emit extraneous markup, or to use an HTML element other than <table>. This article explores a number of enhancements made to the data Web controls in ASP.NET 4.0. As you'll see, most of these enhancements give the developer greater control over the rendered markup. Read on to learn more! Read More >Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Data Web Controls Enhancements in ASP.NET 4.0

    Traditionally, developers using Web controls enjoyed increased productivity but at the cost of control over the rendered markup. For instance, many ASP.NET controls automatically wrap their content in <table> for layout or styling purposes. This behavior runs counter to the web standards that have evolved over the past several years, which favor cleaner, terser HTML; sparing use of tables; and Cascading Style Sheets (CSS) for layout and styling. Furthermore, the <table> elements and other automatically-added content makes it harder to both style the Web controls using CSS and to work with the controls from client-side script. One of the aims of ASP.NET version 4.0 is to give Web Form developers greater control over the markup rendered by Web controls. Last week's article, Take Control Of Web Control ClientID Values in ASP.NET 4.0, highlighted how new properties in ASP.NET 4.0 give the developer more say over how a Web control's ID property is translated into a client-side id attribute. In addition to these ClientID-related properties, many Web controls in ASP.NET 4.0 include properties that allow the page developer to instruct the control to not emit extraneous markup, or to use an HTML element other than <table>. This article explores a number of enhancements made to the data Web controls in ASP.NET 4.0. As you'll see, most of these enhancements give the developer greater control over the rendered markup. Read on to learn more! Read More >

    Read the article

  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
    I've been thrown back into an older project that uses DataSets and DataRows as their entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all this ORM crap we deal with with 'newer' improved technology today - but I disgress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations like queries DataTables and DataRows are still surfaced to the business layer. Here's an example. Here's a business object method that runs dynamic query and the code ends up looping over the result set using the ugly DataRow Array syntax:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { string title = row["title"] as string; string safeTitle = row["safeTitle"] as string; int pk = (int)row["pk"]; string newSafeTitle = this.GetSafeTitle(title); if (newSafeTitle != safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",pk) ); result++; } } return result; } The problem with looping over DataRow objecs is two fold: The array syntax is tedious to type and not real clear to look at, and explicit casting is required in order to do anything useful with the values. I've highlighted the place where this matters. Using the DynamicDataRow class I'll show in a minute this code can be changed to look like this:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { dynamic entry = new DynamicDataRow(row); string newSafeTitle = this.GetSafeTitle(entry.title); if (newSafeTitle != entry.safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",entry.pk) ); result++; } } return result; } The code looks much a bit more natural and describes what's happening a little nicer as well. Well, using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class. Creating your own custom Dynamic Objects .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM Interop scenearios) but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition you can create DynamicObject implementations that can perform  custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Bascially you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access. 
In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what simple class looks like:/// <summary> /// This class provides an easy way to turn a DataRow /// into a Dynamic object that supports direct property /// access to the DataRow fields. /// /// The class also automatically fixes up DbNull values /// (null into .NET and DbNUll to DataRow) /// </summary> public class DynamicDataRow : DynamicObject { /// <summary> /// Instance of object passed in /// </summary> DataRow DataRow; /// <summary> /// Pass in a DataRow to work off /// </summary> /// <param name="instance"></param> public DynamicDataRow(DataRow dataRow) { DataRow = dataRow; } /// <summary> /// Returns a value from a DataRow items array. /// If the field doesn't exist null is returned. /// DbNull values are turned into .NET nulls. /// /// </summary> /// <param name="binder"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; try { result = DataRow[binder.Name]; if (result == DBNull.Value) result = null; return true; } catch { } result = null; return false; } /// <summary> /// Property setter implementation tries to retrieve value from instance /// first then into this object /// </summary> /// <param name="binder"></param> /// <param name="value"></param> /// <returns></returns> public override bool TrySetMember(SetMemberBinder binder, object value) { try { if (value == null) value = DBNull.Value; DataRow[binder.Name] = value; return true; } catch {} return false; } } To demonstrate the basic features here's a short test: [TestMethod] [ExpectedException(typeof(RuntimeBinderException))] public void BasicDataRowTests() { DataTable table = new DataTable("table"); table.Columns.Add( new DataColumn() { ColumnName = "Name", DataType=typeof(string) }); table.Columns.Add( new DataColumn() { ColumnName = "Entered", DataType=typeof(DateTime) }); table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) }); DataRow row = table.NewRow(); DateTime now = DateTime.Now; row["Name"] = "Rick"; row["Entered"] = now; row["NullValue"] = null; // converted in DbNull dynamic drow = new DynamicDataRow(row); string name = drow.Name; DateTime entered = drow.Entered; string nulled = drow.NullValue; Assert.AreEqual(name, "Rick"); Assert.AreEqual(entered,now); Assert.IsNull(nulled); // this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); } The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa which is something that makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP -  but the fact that we can create a dynamic type that uses a DataRow as it's 'DataSource' to serve member values. It's pretty useful feature if you think about it, especially given how little code it takes to implement. 
By implementing these two simple methods we get to provide two features I was complaining about at the beginning that are missing from the DataRow: Direct Property Syntax Automatic Type Casting so no explicit casts are required Caveats As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing where everything is compiled and linked by the compiler/linker, member invokations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocations - the original code I showed - is going to be faster than the equivalent dynamic code. However, in the above code the difference of running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly so that repeated calls to the same operations are routed very efficiently which actually makes for very fast evaluation. The bottom line for performance with dynamic code is: Make sure you test and profile your code if you think that there might be a performance issue. However, in my experience with dynamic types so far performance is pretty good for repeated operations (ie. in loops). While usually a little slower the perf hit is a lot less typically than equivalent Reflection work. Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime and so there's no type recognition until runtime. This means no Intellisense at development time, and any invalid references that call into 'properties' (ie. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name:// this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); Dynamic - Lots of uses The arrival of Dynamic types in .NET has been met with mixed emotions. Die hard .NET developers decry dynamic types as an abomination to the language. After all what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand there are clearly scenarios when dynamic can make life much easier (COM Interop being one place). Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here you can also implement 'Method missing' behavior on objects with InvokeMember which essentially allows you to create dynamic methods. It's all very flexible and maybe just as important: It's easy to do. There's a lot of power hidden in this seemingly simple interface. Your move…© Rick Strahl, West Wind Technologies, 2005-2011Posted in CSharp  .NET   Tweet (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • SQL – NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer

    - by Pinal Dave
    I recently wrote a four-part series on how I started to learn about and begin my journey with NuoDB. Big Data is indeed a big world and the learning of the Big Data is like spaghetti – no one knows in reality where to start, so I decided to learn it with the help of NuoDB. You can download NuoDB and continue your journey with me as well. Part 1 – Install NuoDB in 90 Seconds Part 2 – Manage NuoDB Installation Part 3 – Explore NuoDB Database Part 4 – Migrate from SQL Server to NuoDB …and in this blog post we will try to answer the most asked question about NuoDB. “I like the NuoDB Explorer but can I connect to NuoDB from my preferred Graphical User Interface?” Honestly, I did not expect this question to be asked of me so many times but from the question it is clear that we developers absolutely want to learn new things and along with that we do want to continue to use our most efficient developer tools. Now here is the answer to the question: “Absolutely, you can continue to use any of the following most popular SQL clients.” NuoDB supports the three most popular 3rd-party SQL clients. In all the leading development environments there are always more than one database installed and managing each of them with a different tool is often a very difficult task. Developers like to use one tool, which can control most of the databases. Once developers are familiar with one database tool it is very difficult for them to switch to another tool. This is particularly difficult when we developers find that tool to be the key reason for our efficiency. Let us see how to install each of the NuoDB supported 3rd party tools along with a quick tutorial on how to go about using them. SQuirreL SQL Client First download SQuirreL Universal SQL client. On the Windows platform you can double-click on the file and it will install the SQuirrel client. Once it is installed, open the application and it will bring up the following screen. Now go to the Drivers tab on the left side and scroll it down. You will find NuoDB mentioned there. Now right click over it and click on Modify Driver. Now here is where you need to make sure that you make proper entries or your client will not work with the database. Enter following values: Name: NuoDB Example URL: jdbc:com:nuodb://localhost:48004/test Website URL: http://www.nuodb.com Now click on the Extra Class Path tab and Add the location of the nuodbjdbc.jar file. If you are following my blog posts and have installed NuoDB in the default location, you will find the default path as C:\Program Files\NuoDB\jar\nuodbjdbc.jar. The class name of the driver is automatically populated. Once you click OK you will see that there is a small icon displayed to the left of NuoDB, which shows that you have successfully configured and installed the NuoDB driver. Now click on the tab of Alias tab and you can notice that it is empty. Now click on the big Plus icon and it will open screen of adding an alias. “Alias” means nothing more than adding a database to your system. The database name of the original installation can be anything and, if you wish, you can register the database with any other alternative name. Here are the details you should fill into the Alias screen below. 
Name: Test (or your preferred alias) Driver: NuoDB URL: jdbc:com:nuodb://localhost:48004/test (This is for test database) User Name: dba (This is the username which I entered for test Database) Password: goalie (This is the password which I entered for test Database) Check Auto Logon and Connect at Startup and click on OK. That’s it! You are done. On the right side you will see a table name and on the left side you will see various tabs with all the relevant details from respective table. You can see various metadata, schemas, data types and other information in the table. In addition, you can also generate script and do various important tasks related to database. You can see how easy it is to configure NuoDB with the SQuirreL Client and get going with it immediately. SQL Workbench/J This is another wonderful client tool, which works very well with NuoDB. The best part is that in the Driver dropdown you will see NuoDB being mentioned there. Click here to download  SQL Workbench/J Universal SQL client. The download process is straight forward and the installation is a very easy process for SQL Workbench/J. As soon as you open the client, you will notice on following screen the NuoDB driver when selecting a New Connection Profile. Select NuoDB from the drop down and click on OK. In the driver information, enter following details: Driver: NuoDB (com.nuodb.jdbc.Driver) URL: jdbc:com.nuodb://localhost/test Username: dba Password: goalie While clicking on OK, it will bring up the following pop-up. Click Yes to edit the driver information. Click on OK and it will bring you to following screen. This is the screen where you can perform various tasks. You can write any SQL query you want and it will instantly show you the results. Now click on the database icon, which you see right on the left side of the word User=dba.  Once you click on Database Explorer, you can perform various database related tasks. As a developer, one of my favorite tasks is to look at the source of the table as it gives me a proper view of the structure of the database. I find SQL Workbench/J very efficient in doing the same. DbVisualizer DBVisualizer is another great tool, which helps you to connect to NuoDB and retrieve database information in your desired format. A developer who is familiar with DBVisualizer will find this client to be very easy to work with. The installation of the DBVisualizer is very pretty straight forward. When we open the client, it will bring us to the following screen. As a first step we need to set up the driver. Go to Tools >> Driver Manager. It will bring up following screen where we set up the diver. Click on Create Driver and it will open up the driver settings on the right side. On the right side of the area where it displays Driver Settings please enter the following values- Name: NuoDB URL Format: jdbc:com.nuodb://localhost:48004/test Now under the driver path, click on the folder icon and it will ask for the location of the jar file. Provide the path as a C:\Program Files\NuoDB\jar\nuodbjdbc.jar and click OK. You will notice there is a green button displayed at the bottom right corner. This means the driver is configured properly. Once driver is configured properly, we can go to Create Database Connection and create a database. If the pop up show up for the Wizard. Click on No Wizard and continue to enter the settings manually. Here is the Database Connection screen. This screen can be bit tricky. Here are the settings you need to remember to enter. 
Name: NuoDB Database Type: Generic Driver: NuoDB Database URL: jdbc:com.nuodb://localhost:48004/test Database Userid: dba Database Password: goalie Once you enter the values, click on Connect. Once Connect is pressed, it will change the button value to Reconnect if the connection is successfully established and it will show the connection details on lthe eft side. When we further explore the NuoDB, we can see various tables created in our test application. We can further click on the right side screen and see various details on the table. If you click on the Data Tab, it will display the entire data of the table. The Tools menu also has some very interesting and cool features like Driver Manager, Data Monitor and SQL History. Summary Well, this was a relatively long post but I find it is extremely essential to cover all the three important clients, which we developers use in our daily database development. Here is my question to you? Which one of the following is your favorite NuoDB 3rd-Party Database Client? (Pick One) SQuirreL SQL Client SQL Workbench/J DbVisualizer I will be very much eager to read your experience about NuoDB. You can download NuoDB from here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Benchmarking MySQL on win7

    - by Patrick
    I've setup a nginx server running php 5.3.6 and mysql 5.5.1.3. My computer is an AMD quadcore 9650, 4gb ram, 500gb 7200rpm HD. I ran the PHP MySQL Benchmark Tool v. 0.1, and got the following results: Testing a(n) MYISAM table using 100000 rows. Successfully created database speedtestdb Sucessfully created table speedtesttable Table Type Verified: MYISAM .. Done. 100000 inserts in 19.73628 seconds or 5067 inserts per second. Done. 100000 row reads in 0.2801 seconds or 357015 row reads per second. Done. 100000 updates in 4.03876 seconds or 24760 updates per second. I'm wondering where this stands as far as performance goes, and what are some steps I can take if any to improve on this. I'm not trying to make anything fantastic, just getting a feel for how to best optimize a web server in this configuration.

    Read the article

  • Optimize SUMMARIZE with ADDCOLUMNS in Dax #ssas #tabular #dax #powerpivot

    - by Marco Russo (SQLBI)
    If you started using DAX as a query language, you might have encountered some performance issues by using SUMMARIZE. The problem is related to the calculation you put in the SUMMARIZE, by adding what are called extension columns, which compute their value within a filter context defined by the rows considered in the group that the SUMMARIZE uses to produce each row in the output. Most of the time, for simple table expressions used in the first parameter of SUMMARIZE, you can optimize performance by removing the extended columns from the SUMMARIZE and adding them by using an ADDCOLUMNS function. In practice, instead of writing SUMMARIZE( <table>, <group_by_column>, <column_name>, <expression> ) you can write: ADDCOLUMNS(     SUMMARIZE( <table>, <group by column> ),     <column_name>, CALCULATE( <expression> ) ) The performance difference might be huge (orders of magnitude) but this optimization might produce a different semantic and in these cases it should not be used. A longer discussion of this topic is included in my Best Practices Using SUMMARIZE and ADDCOLUMNS article on SQLBI, which also include several details about the DAX syntax with extended columns. For example, did you know that you can create an extended column in SUMMARIZE and ADDCOLUMNS with the same name of existing measures? It is *not* a good thing to do, and by reading the article you will discover why. Enjoy DAX!

    Read the article

< Previous Page | 288 289 290 291 292 293 294 295 296 297 298 299  | Next Page >