Search Results


  • OEG11gR2 integration with OES11gR2 Authorization with condition

    - by pgoutin
    Introduction

    This OES use-case was originally defined by Subbu Devulapalli (http://accessmanagement.wordpress.com/). Based on this OES museum use-case, I have developed an OEG11gR2 policy able to deal with OES authorization with a condition. From an OEG point of view, the way to deal with an OES condition is to send some environmental / context attributes along with the OES request.

    Museum Use-Case

    All paintings in the museum have security sensors, and an alarm goes off when a person comes too close to a painting. The employee designated for maintenance needs to use their ID and disable the alarm before maintenance. You are the Security Administrator for the museum, and you have been tasked with creating authorization policies to manage authorization for the different paintings. Your first task is to understand how the paintings are organized. Asking around, you are surprised to see that there is no formal process in place, so you need to start from scratch. The museum tracks the following attributes for each painting: 1. Name of the work 2. Painter 3. Condition (good/poor) 4. Cost. You compile the list of paintings:

      Name of Painting | Painter | Paint Condition | Cost
      Mona Lisa | Leonardo da Vinci | Good | 100
      Magi | Leonardo da Vinci | Poor | 40
      Starry Night | Vincent Van Gogh | Poor | 75
      Still Life | Vincent Van Gogh | Good | 25

    Being a software geek who doesn't (yet) understand art, you feel that the price (or insurance price) of a painting is the most important criterion, so employees can be assigned to paintings based on their years of experience. You decide that paintings worth over 50 should only be handled by employees with over 20 years of experience, and that employees with less than 10 years of experience should not handle any painting.

    Let us start with policy modeling. All paintings have a common set of attributes and actions, so it makes sense to put them under a single Resource Type and create the actual resources from it. Our high-level model is:
    1) Resource Type: Painting, which has the action manage and the following four attributes: a) Name of the work b) Painter c) Condition (good/poor) d) Cost.
    2) To keep things simple, we use the painting name as the Resource name (in the real world you would use an identifier which is unique, because in the future we may end up with more than one painting with the same name).
    3) Create Resources based on the previous table.
    4) Create an identity attribute Experience (Integer).
    5) Create the following authorization policies:
       a) Allow employees with over 20 years of experience to access all paintings.
       b) Allow employees with 10 to 20 years of experience to access paintings which cost less than 50.
       c) Deny access to all paintings for employees with less than 10 years of experience.

    OES Authorization Configuration

    We only need to create two authorization policies with specific conditions, (a) and (b). We do not need an explicit policy for (c), denying access to all paintings for employees with less than 10 years of experience, because Oracle Entitlements Server automatically denies a request when there is no matching policy.
    OEG Policy

    The OEG policy and the corresponding 11g Authorization filter configuration are shown in the screenshots from the original post. The ${PAINTING_NAME} and ${USER_EXPERIENCE} variables are initialized by "Retrieve from the HTTP header" filters, for testing purposes. That is to say, under Service Explorer we need to provide two attributes, "Experience" and "Painting", to feed the OES 11g Authorization filter configured above.
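
    To make the three rules above easy to sanity-check before wiring them into OES, here is a minimal C# sketch of the same decision logic. It is purely illustrative: it does not use the OES or OEG APIs, and the type and member names are invented for the example.

      // Illustrative only: mirrors the museum policy rules, not the OES/OEG API.
      using System;

      class Painting
      {
          public string Name;
          public string Painter;
          public string Condition;   // "good" or "poor"
          public int Cost;           // price / insurance price
      }

      static class MuseumPolicy
      {
          // Returns true if an employee with the given years of experience
          // may perform the "manage" action on the painting.
          public static bool CanManage(int yearsOfExperience, Painting painting)
          {
              if (yearsOfExperience < 10)
                  return false;                  // rule (c): implicit deny in OES
              if (yearsOfExperience > 20)
                  return true;                   // rule (a): all paintings
              return painting.Cost < 50;         // rule (b): 10 to 20 years, cost under 50
          }
      }

      class Demo
      {
          static void Main()
          {
              var monaLisa = new Painting { Name = "Mona Lisa", Painter = "Leonardo da Vinci", Condition = "good", Cost = 100 };
              var stillLife = new Painting { Name = "Still Life", Painter = "Vincent Van Gogh", Condition = "good", Cost = 25 };
              Console.WriteLine(MuseumPolicy.CanManage(25, monaLisa));  // True
              Console.WriteLine(MuseumPolicy.CanManage(15, monaLisa));  // False
              Console.WriteLine(MuseumPolicy.CanManage(15, stillLife)); // True
              Console.WriteLine(MuseumPolicy.CanManage(5, stillLife));  // False
          }
      }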

    Read the article

  • Upgraded to 12.04 now wifi doesn't work

    - by Benito Kestelman
    My laptop's wifi stopped working when I upgraded to Ubuntu 12.04 (wired works). I just reinstalled 12.04 over my old 12.04, on which wifi didn't work either, in an attempt to restore any settings I may have accidentally changed, but it still doesn't work. I also used a wired connection to install updates in case this bug has been fixed, but it has not. Here is the result of sudo lshw -class network:
      *-network
           description: Wireless interface
           product: Centrino Wireless-N + WiMAX 6150
           vendor: Intel Corporation
           physical id: 0
           bus info: pci@0000:02:00.0
           logical name: wlan0
           version: 67
           serial: 40:25:c2:5f:5b:f4
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-29-generic-pae firmware=41.28.5.1 build 33926 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
           resources: irq:51 memory:de800000-de801fff
      *-network
           description: Ethernet interface
           product: AR8151 v2.0 Gigabit Ethernet
           vendor: Atheros Communications Inc.
           physical id: 0
           bus info: pci@0000:04:00.0
           logical name: eth0
           version: c0
           serial: 14:da:e9:c0:da:78
           capacity: 1Gbit/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI firmware=N/A latency=0 link=no multicast=yes port=twisted pair
           resources: irq:54 memory:dd400000-dd43ffff ioport:a000(size=128)
    Here is rfkill list all:
      0: phy0: Wireless LAN
           Soft blocked: no
           Hard blocked: no
      1: asus-wlan: Wireless LAN
           Soft blocked: no
           Hard blocked: no
      2: asus-wimax: WiMAX
           Soft blocked: no
           Hard blocked: no
    lsusb:
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
      Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
      Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
      Bus 001 Device 003: ID 8087:07d6 Intel Corp.
      Bus 001 Device 004: ID 13d3:5710 IMC Networks
      Bus 002 Device 003: ID 045e:0745 Microsoft Corp. Nano Transceiver v1.0 for Bluetooth
      Bus 003 Device 003: ID 0781:5530 SanDisk Corp. Cruzer
    lspci:
      00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
      00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
      00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
      00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
      00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
      00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
      00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5)
      00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5)
      00:1c.5 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 (rev b5)
      00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
      00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05)
      00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
      00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
      02:00.0 Network controller: Intel Corporation Centrino Wireless-N + WiMAX 6150 (rev 67)
      03:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
      04:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I'm going to postpone that until my next post, as I've had a request from a SQL Monitor user to explain how alerts are stored.

    In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables:
      alert.Alert
      alert.Alert_Cleared
      alert.Alert_Comment
      alert.Alert_Severity
      alert.Alert_Type
    In this post, I'm only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post.

    The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let's have a look at it.

      SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType FROM alert.Alert ORDER BY AlertId DESC;

      AlertId | AlertType | TargetObject | Read | SubType
      65550 | 39 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 1 | 0
      65549 | 38 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 1 | 0
      65548 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
      65547 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
      65546 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
      65545 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
      65544 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
      65543 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
      65542 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
      65541 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
      …

    So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to.

    Okay, now let's look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings us nicely to the next table, alert.Alert_Type. Let's have a look at what's in this table:

      SELECT AlertType, Event, Monitoring, Name, Description FROM alert.Alert_Type ORDER BY AlertType;

      AlertType | Event | Monitoring | Name | Description
      1 | 0 | 0 | Processor utilization | Processor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration
      2 | 1 | 0 | SQL Server error log entry | An error is written to the SQL Server error log with a severity level above a specified value.
      3 | 1 | 0 | Cluster failover | The active cluster node fails, causing the SQL Server instance to switch nodes.
      4 | 1 | 0 | Deadlock | SQL deadlock occurs.
      5 | 0 | 0 | Processor under-utilization | Processor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration
      6 | 1 | 0 | Job failed | A job does not complete successfully (the job returns an error code).
      7 | 0 | 0 | Machine unreachable | Host machine (Windows server) cannot be contacted on the network.
      8 | 0 | 0 | SQL Server instance unreachable | The SQL Server instance is not running or cannot be contacted on the network.
      9 | 0 | 0 | Disk space | Disk space used on a logical disk drive is above a defined threshold for longer than a specified duration.
      10 | 0 | 0 | Physical memory | Physical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration.
      11 | 0 | 0 | Blocked process | SQL process is blocked for longer than a specified duration.
      12 | 0 | 0 | Long-running query | A SQL query runs for longer than a specified duration.
      14 | 0 | 0 | Backup overdue | No full backup exists, or the last full backup is older than a specified time.
      15 | 0 | 0 | Log backup overdue | No log backup exists, or the last log backup is older than a specified time.
      16 | 0 | 0 | Database unavailable | Database changes from Online to any other state.
      17 | 0 | 0 | Page verification | Torn Page Detection or Page Checksum is not enabled for a database.
      18 | 0 | 0 | Integrity check overdue | No entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time.
      19 | 0 | 0 | Fragmented indexes | Fragmentation level of one or more indexes is above a threshold percentage.
      24 | 0 | 0 | Job duration unusual | The duration of a SQL job deviates from its baseline duration by more than a threshold percentage.
      25 | 0 | 1 | Clock skew | System clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds.
      27 | 0 | 0 | SQL Server Agent Service status | The SQL Server Agent Service status matches the status specified.
      28 | 0 | 0 | SQL Server Reporting Service status | The SQL Server Reporting Service status matches the status specified.
      29 | 0 | 0 | SQL Server Full Text Search Service status | The SQL Server Full Text Search Service status matches the status specified.
      30 | 0 | 0 | SQL Server Analysis Service status | The SQL Server Analysis Service status matches the status specified.
      31 | 0 | 0 | SQL Server Integration Service status | The SQL Server Integration Service status matches the status specified.
      33 | 0 | 0 | SQL Server Browser Service status | The SQL Server Browser Service status matches the status specified.
      34 | 0 | 0 | SQL Server VSS Writer Service status | The SQL Server VSS Writer status matches the status specified.
      35 | 0 | 1 | Deadlock trace flag disabled | The monitored SQL Server’s trace flag cannot be enabled.
      36 | 0 | 0 | Monitoring stopped (host machine credentials) | SQL Monitor cannot contact the host machine because authentication failed.
      37 | 0 | 0 | Monitoring stopped (SQL Server credentials) | SQL Monitor cannot contact the SQL Server instance because authentication failed.
      38 | 0 | 0 | Monitoring error (host machine data collection) | SQL Monitor cannot collect data from the host machine.
      39 | 0 | 0 | Monitoring error (SQL Server data collection) | SQL Monitor cannot collect data from the SQL Server instance.
      40 | 0 | 0 | Custom metric | The custom metric value has passed an alert threshold.
      41 | 0 | 0 | Custom metric collection error | SQL Monitor cannot collect custom metric data from the target object.
    Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self-evident, and I'm going to skip over the Event and Monitoring columns as they're not very interesting. The AlertType column is the primary key, and is referenced by the AlertType column in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts:

      SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType ORDER BY AlertId DESC;

      AlertId | Name | TargetObject | Read | SubType
      65550 | Monitoring error (SQL Server data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 0 | 0
      65549 | Monitoring error (host machine data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 0 | 0
      65548 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
      65547 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
      65546 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
      65545 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
      65544 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
      65543 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
      65542 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
      65541 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0

    Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one's a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it's probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it's best to consider an example from the table above.

    Have a look at the alert with an AlertID of 65543. It's a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus:

      Cluster: Name = "granger"
      SqlServer: Name = "" (an empty string, denoting the default instance)
      Database: Name = "SqlMonitorData"

    Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,".
    It is indeed composed of three parts, one for each level in the hierarchy:

      Cluster: "7:Cluster,1,4:Name,s7:granger,"
      SqlServer: "9:SqlServer,1,4:Name,s0:,"
      Database: "8:Database,1,4:Name,s14:SqlMonitorData,"

    Each part is handled in exactly the same way, so let's concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following:

      "7:Cluster," – This identifies the level in the hierarchy.
      "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name).
      "4:Name,s7:granger," – This represents the Name property, and its corresponding value, granger. It's split up like this:
        "4:Name," – Indicates the name of the key property.
        "s" – Indicates the type of the key property, in this case, it's a string.
        "7:granger," – Indicates the value of the property.

    At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well an encoding scheme is used, which consists of the following:

      "7" – This is the length of the string "Cluster".
      ":" – This is a delimiter between the length of the string and the actual string's contents.
      "Cluster" – This is the string itself. 7 characters.
      "," – This is a final terminating character that indicates the end of the encoded string.

    You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows:

      "I" – Denotes a bigint value. For example, "I65432,".
      "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,".
      "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I'll describe how datetime values are handled in the SQL Monitor data repository in a future post.

    I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I'm interested in:

      SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType WHERE AlertId = 65769;

      AlertId | AlertType | Name | TargetObject | Read | SubType
      65769 | 40 | Custom metric | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

    An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn't tell us anything about the specific custom metric that this alert pertains to. That's where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table.
I don’t really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert. SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id WHERE AlertId = 65769;  AlertIdAlertTypeNameCustomAlertNameTargetObjectReadSubType 16576940Custom metricCPU pressure (avg runnable task count)7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,02 The TargetObject value in this case breaks down like this: "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger". "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance). "8:Database,1,4:Name,s6:master," – Database named "master". "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2. Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It’s root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I’d like to point out that whilst the SubType references a custom alert definition, the MetricID value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case. Okay, that’s enough for now, not least because as I’m typing this, it’s almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I’ll either cover the remaining three tables in the alert schema, or I’ll delve into the way SQL Monitor stores its monitoring data, as I’d originally planned to cover in this post.
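
    If you ever need to pull TargetObject values apart outside of SQL Monitor, the length-prefixed encoding described above (length, colon, content, trailing comma, with an s/I/g/d prefix on values) is straightforward to parse. The following C# sketch is my own reading of that format, not SQL Monitor's actual code, and the class and method names are invented for the example:

      // Rough sketch of a parser for the TargetObject encoding described in the post:
      // "<length>:<content>," for names, and s/I/g/d-prefixed values. Not SQL Monitor's own code.
      using System;
      using System.Collections.Generic;

      class TargetObjectParser
      {
          private readonly string _s;
          private int _pos;

          public TargetObjectParser(string serialized) { _s = serialized; }

          // Reads "<length>:<content>," and returns <content>.
          private string ReadLengthPrefixed()
          {
              int colon = _s.IndexOf(':', _pos);
              int length = int.Parse(_s.Substring(_pos, colon - _pos));
              string content = _s.Substring(colon + 1, length);
              _pos = colon + 1 + length + 1;              // skip content and the trailing comma
              return content;
          }

          // Reads a bare token terminated by ',' (used for property counts and I/g/d values).
          private string ReadUntilComma()
          {
              int comma = _s.IndexOf(',', _pos);
              string content = _s.Substring(_pos, comma - _pos);
              _pos = comma + 1;
              return content;
          }

          // Returns one entry per hierarchy level, e.g. "Cluster: Name = granger".
          public List<string> Parse()
          {
              var levels = new List<string>();
              while (_pos < _s.Length)
              {
                  string levelName = ReadLengthPrefixed();          // e.g. "Cluster"
                  int propertyCount = int.Parse(ReadUntilComma());  // e.g. "1"
                  var props = new List<string>();
                  for (int i = 0; i < propertyCount; i++)
                  {
                      string propName = ReadLengthPrefixed();       // e.g. "Name"
                      char typeCode = _s[_pos++];                   // s, I, g or d
                      string value = typeCode == 's'
                          ? ReadLengthPrefixed()                    // length-prefixed string
                          : ReadUntilComma();                       // bare bigint/guid/ticks
                      props.Add(propName + " = " + value);
                  }
                  levels.Add(levelName + ": " + string.Join(", ", props));
              }
              return levels;
          }

          static void Main()
          {
              var target = "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:," +
                           "8:Database,1,4:Name,s14:SqlMonitorData,";
              foreach (var level in new TargetObjectParser(target).Parse())
                  Console.WriteLine(level);
              // Cluster: Name = granger
              // SqlServer: Name =
              // Database: Name = SqlMonitorData
          }
      }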

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter's #sqlhelp hash tag: "Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn't referenced in the query?" Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven't asked for. In general, that's quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column…

    Example Table

    Here's a T-SQL script to create that table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540); WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000;

    Test 1: A Simple Update

    Let's run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn't take very long, or use many resources. The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time. The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed. The some_value column is used by the Compute Scalar to calculate the new value. (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT).

    Test 2: Simple Update with an Index

    Now let's create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose. Let's run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms. The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before: The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update. SQL Server may be able to skip maintaining the nonclustered index if the value hasn't changed (see my previous post on non-updating updates for details). Our simple query does change the value of some_value in every row, so this optimization doesn't add any value in this specific case. The output list of columns from the Clustered Index Scan hasn't changed from the one shown previously: SQL Server still just reads the pk and some_value columns. Cool.
    Overall then, adding the nonclustered index hasn't had any startling effects, and the LOB column data still isn't being read from the table. Let's see what happens if we make the nonclustered index unique.

    Test 3: Simple Update with a Unique Index

    Here's the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_value column). The lob_data column is simply stored at the leaf-level of the non-clustered index. With that in mind, we might expect this change to make very little difference. Let's see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa! Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992. CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column?

    The Query Plan

    The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index. The top level has heaps of new stuff though, which I'll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason). After all, we need to explain where all the LOB logical reads are coming from. Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what's doing the LOB reads?

    Updates that Sneakily Read Data

    We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar. It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD. The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column. At this point, the clustered index has already been updated with the new value, but we haven't touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation. SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update. The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then. Sneaky!

    Why the LOB Data Is Needed

    This is all very interesting (I hope), but why is SQL Server reading the LOB data? For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • CodePlex Daily Summary for Sunday, November 20, 2011

    CodePlex Daily Summary for Sunday, November 20, 2011Popular ReleasesFree SharePoint 2010 Sites Templates: SharePoint Server 2010 Sites Templates: here is the list of sites templates to be downloadednopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.30: Highlight features & improvements: • Performance optimization. • Back in stock notifications. • Product special price support. • Catalog mode (based on customer role) To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).Cetacean Monitoring: Cetacean Monitoring Project Release V 0.1: This is a zip with a working executable for evaluation purposes.WPF Converters: WPF Converters V1.2.0.0: support for enumerations, value types, and reference types in the expression converter's equality operators the expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062) StyleCop conformance (more or less)Json.NET: Json.NET 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default Change - KeyValurPairConverter no longer cares about the order of the key and value properties Change - Time zone conversions now use new TimeZoneInfo instead of TimeZone Fix - Fixed boolean values sometimes being capitalized when converting to XML Fix - Fixed error when deserializing ConcurrentDictionary Fix - Fixed serializing some Uris returning the incorrect value Fix - Fixed occasional error when...Media Companion: MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Replaced 'Rebuild' with 'Refresh' throughout entire code. Rebuild will now be known as Refresh. mc_com.exe has been fully updated TV Show Resolutions... Resolved issue #206 - having to hit save twice when updating runtime manually Shrunk cache size and lowered loading times f...Delta Engine: Delta Engine Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples and support for iOS, Android and WP7 (you need a Marketplace account however). If you want a binary release for the games (like v0.9.0), just say so in the Forum or here and we will quickly prepare one. It is just not much different from v0.9.0, so I left it out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.ASP.net Awesome Samples (Web-Forms): 1.0 samples: Full Demo VS2008 Very Simple Demo VS2010 (demos for the ASP.net Awesome jQuery Ajax Controls)SharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as per 2011-11-17 For most applications the AnyCPU release is the recommended, but in case you need an x86 build that is included to. For some dataproviders (GDAL/OGR, SqLite, PostGis) you need to also referense the SharpMap.Extensions assembly For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assemblySQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 5: 1. added basic schema support 2. added server instance name and process id 3. fixed problem with object search index out of range 4. improved version comparison with previous/next difference navigation 5. 
remeber main window spliter and object explorer spliter positionAJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release Version 51116November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - Binary – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: Now the DateRanteAttribute accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu, MenuFor helpers capable of handling a "currently selected element". The developer can choose between using a standard nested menu based on a standard SimpleMenuItem class or specifying an item template based on a custom class. Added also helpers to build the tree structure containing all data items the menu takes infos from. Improved the pager. Now the developer ...SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.7: Reworked API to be more consistent. See Supported formats table. Added some more helper methods - e.g. OpenEntryStream (RarArchive/RarReader does not support this) Fixed up testsSilverlight Toolkit: Windows Phone Toolkit - Nov 2011 (7.1 SDK): This release is coming soon! What's new ListPicker once again works in a ScrollViewer LongListSelector bug fixes around OutOfRange exceptions, wrong ordering of items, grouping issues, and scrolling events. ItemTuple is now refactored to be the public type LongListSelectorItem to provide users better access to the values in selection changed handlers. PerformanceProgressBar binding fix for IsIndeterminate (item 9767 and others) There is no longer a GestureListener dependency with the C...DotNetNuke® Community Edition: 06.01.01: Major Highlights Fixed problem with the core skin object rendering CSS above the other framework inserted files, which caused problems when using core style skin objects Fixed issue with iFrames getting removed when content is saved Fixed issue with the HTML module removing styling and scripts from the content Fixed issue with inserting the link to jquery after the header of the page Security Fixesnone Updated Modules/Providers ModulesHTML version 6.1.0 ProvidersnoneDotNetNuke Performance Settings: 01.00.00: First release of DotNetNuke SQL update queries to set the DNN installation for optimimal performance. Please review and rate this release... (stars are welcome)SCCM Client Actions Tool: SCCM Client Actions Tool v0.8: SCCM Client Actions Tool v0.8 is currently the latest version. It comes with following changes since last version: Added "Wake On LAN" action. WOL.EXE is now included. Added new action "Get all active advertisements" to list all machine based advertisements on remote computers. Added new action "Get all active user advertisements" to list all user based advertisements for logged on users on remote computers. Added config.ini setting "enablePingTest" to control whether ping test is ru...C.B.R. 
: Comic Book Reader: CBR 0.3: New featuresAdd magnifier size and scale New file info view in the backstage Add dynamic properties on book and settings Sorting and grouping in the explorer with new design Rework on conversion : Images, PDF, Cbr/rar, Cbz/zip, Xps to the destination formats Images, Cbz and XPS ImprovmentsSuppress MainViewModel and ExplorerViewModel dependencies Add view notifications and Messages from MVVM Light for ViewModel=>View notifications Make thread better on open catalog, no more ihm freeze, less t...Desktop Google Reader: 1.4.2: This release remove the like and the broadcast buttons as Google Reader stopped supporting them (no, we don't like this decission...) Additionally and to have at least a small plus: the login window now automaitcally logs you in if you stored username and passwort (no more extra click needed) Finally added WebKit .NET to the about window and removed Awesomium MD5-Hash: 5fccf25a2fb4fecc1dc77ebabc8d3897 SHA-Hash: d44ff788b123bd33596ad1a75f3b9fa74a862fdbRDRemote: Remote Desktop remote configurator V 1.0.0: Remote Desktop remote configurator V 1.0.0New ProjectsAutonomous Robot Combat: The project is about a Robotic game arena where 6-8 robots will engage in a team combat using infrared guns. The infrared guns will also have red light LEDs to simulate muzzle flash. Once the project finishes, it will be displayed in University of Plymouth.B2BPro: B2BPro best solution for businessBattlestar Galactica Fighters: Battlestar Galactica Fighters is a 3D vertical scrolling shoot 'em up. It's developed in C# and F# for the XNA Programming exam at Master in Computer Game Developement 2011/2012 in Verona, Italy.Content Compiler 3 - Compile your XNA media outside Visual Studio!: The Content Compiler helps you to compile your media files for use with XNA without using Visual Studio. After almost three years of Development, the third version of the CCompiler is nearly finished and we decided to put it up on Codeplex to "keep it alive".CreaMotion NHibernate Class Builder: NHibernate Class Builder C# , WPF Supports all type relations Supports MsSql, MySql -- Specially developed for NHibernate Learnersecs_tqdt_kd: Electric Custom System Thông quan di?n t? Kinh DoanhIMDb Helper: IMDb Helper is a C# library that provides access to information in the IMDb website. IMDb Helper uses web requests to access the IMDb website, and regular expressions to parse the responses (it doesn't use any external library, only pure .NET). Lion Solution: This is an open source Accounting project for small business usage. Developed by: Sleiman Jneidi Hussein Zawawi Management of Master Lists in SharePoint with Choice Filter Lookup column: Management of Master Lists in SharePoint with Choice Filter Lookup column you can view the detail of the project on http://sharepointarrow.blogspot.com/2011/11/management-of-master-lists-in.htmlMRDS FezMini Robot Brick: This is an attemp to write services for MRDS to control a FezMini Robot with a wireless connection attached to COM2 on the FezMini Board.MVC Route Unit Tester: Provides convenient, easy to use methods that let you unit test the route table in your ASP.NET MVC application. Unlike many libraries, this lets you test routes both ways -- both incoming and going. You can specify an incoming request and assert that it matches a given route (or that there are no matches). 
You can also specify route data and assert that a given URL will be generated by your application.MyWalk: MyWalk (codename: MyLife) is a novel health application that makes tracking walking a part of daily life. NGeo: NGeo makes it easier for users of geographic data to invoke GeoNames and Yahoo! GeoPlanet / PlaceFinder services. You'll no longer have to write your own GeoNames, GeoPlanet, or PlaceFinder clients. It's developed in ASP.NET 4.0, and uses WCF ServiceModel libraries to deserialize JSON data into Plain Old C# Objects.Octopus Tools: Octopus is an automated deployment server for .NET applications, powered by NuGet. OctopusTools is a set of useful command line and MSBuild tasks designed for automating Octopus.PDV Moveis: Loja MoveisReddit#: Reddit# is a Reddit library for C# or other .Net languages.Rubik: Rubik is, simply, a stab at creating a decent implementation of a Rubik's cube in WPF, and in the process aplying MVVM to the 3D game milieu.Sample Service-Oriented Architecture: Sample of service-oriented architecture using WCF.SFinger: SFinger adds two finger scrolling to synaptics touchpad on Windows. SigemFinal: Versión final del proyecto de diseñoSlResource: silverlight resource managementSonce - Simple ON-line Circuit Editor: Circuit Editor in Silverlight, unfinished student project written in C#Tricofil: Site da Tricofil com Administrador de ConteudoTwincat Ads .Net Client: This is the client implementation of the Ads/Ams protocol created by Beckhoff. (I'm not affiliated with Beckhoff) This implementation will be in C# and will not depend on other libraries. This means it can be used in silverlight and windows phone projects. This project is not finished yet!!WomanMagazine: This is a woman Online Magazine with lot of info and entertainment resources for women

    Read the article

  • How to bind a datatable to a wpf editable combobox: selectedItem showing System.Data.DataRowView

    - by black sensei
    Hello Good People!! I bound a datatable to a combobox and defined a dataTemplate in the itemTemplate.i can see desired values in the combobox dropdown list,what i see in the selectedItem is System.Data.DataRowView here are my codes: <ComboBox Margin="128,139,123,0" Name="cmbEmail" Height="23" VerticalAlignment="Top" TabIndex="1" ToolTip="enter the email you signed up with here" IsEditable="True" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding}"> <ComboBox.ItemTemplate> <DataTemplate> <StackPanel> <TextBlock Text="{Binding Path=username}"/> </StackPanel> </DataTemplate> </ComboBox.ItemTemplate> The code behind is so : if (con != null) { con.Open(); //users table has columns id | username | pass SQLiteCommand cmd = new SQLiteCommand("select * from users", con); SQLiteDataAdapter da = new SQLiteDataAdapter(cmd); userdt = new DataTable("users"); da.Fill(userdt); cmbEmail.DataContext = userdt; } I've been looking for something like SelectedValueTemplate or SelectedItemTemplate to do the same kind of data templating but i found none. I'll like to ask if i'm doing something wrong or it's a known issue for combobox binding? if something is wrong in my code please point me to the right direction. thanks for reading this

    Read the article

  • Populating FullCalendar events from MVC

    - by jasonmhirst
    I've having difficulty in populating FullCalendar from MVC and would like a little assistance on the matter please. I have the following code for my controller: Function GetEvents(ByVal [start] As Double, ByVal [end] As Double) As JsonResult Dim sqlConnection As New SqlClient.SqlConnection sqlConnection.ConnectionString = My.Settings.sqlConnection Dim sqlCommand As New SqlClient.SqlCommand sqlCommand.CommandText = "SELECT tripID AS ID, tripName AS Title, DATEDIFF(s, '1970-01-01 00:00:00', dateStart) AS [Start], DATEDIFF(s, '1970-01-01 00:00:00', dateEnd) AS [End] FROM tblTrip WHERE userID=18 AND DateStart IS NOT NULL" sqlCommand.Connection = sqlConnection Dim ds As New DataSet Dim da As New SqlClient.SqlDataAdapter(sqlCommand) da.Fill(ds, "Meetings") sqlConnection.Close() Dim meetings = From c In ds.Tables("Meetings") Select {c.Item("ID"), c.Item("Title"), "False", c.Item("Start"), c.Item("End")} Return Json(meetings.ToArray(), JsonRequestBehavior.AllowGet) End Function This does indeed run correctly but the format that is returned is: [[25,"South America 2008","False",1203033600,1227657600],[48,"Levant 2009","False",1231804800,1233619200],[49,"South America 2009","False",1235433600,1237420800],[50,"Italy 2009","False",1241049600,1256083200],[189,"Levant 2010a","False",1265414400,1267574400],[195,"Levant 2010a","False",1262736000,1262736000],[208,"Levant 2010a","False",1264982400,1267574400],[209,"Levant 2010a","False",1264982400,1265587200],[210,"Levant 2010","False",1264982400,1266969600],[211,"Levant 2010 b","False",1267056000,1267574400],[213,"South America 2010a","False",1268438400,1269648000],[214,"Levant 2010 c","False",1266364800,1264118400],[215,"South America 2010a","False",1268611200,1269648000],[217,"South America 2010","False",1268611200,1269561600],[218,"South America 2010 b","False",1268956800,1269388800],[227,"levant 2010 b","False",1265846400,1266192000]] And this is totally different to what I've seen on the post from here: http://stackoverflow.com/questions/2445359/jquery-fullcalendar-json-date-issue (note the lack of tag information and curly braces) Can someone please explain to me what I may be doing wrong and why my output isn't correctly formatted. TIA
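
    The output above is a JSON array of arrays because the LINQ projection selects a plain array of values, so the serializer has no property names to emit; projecting into objects with named members is what produces the curly braces and tags shown in the linked post. Here is a hedged sketch of that shape in C# (rather than the question's VB.NET). The CalendarController and LoadMeetings names are invented, LoadMeetings stands in for the ADO.NET code already shown above, and the property names follow the usual FullCalendar event fields.

      // Sketch only: projects each DataRow into an object with named properties so
      // Json() emits {"id":..,"title":..,"allDay":..,"start":..,"end":..}.
      // Column names match the aliases in the question's SQL.
      using System.Data;
      using System.Linq;
      using System.Web.Mvc;

      public class CalendarController : Controller
      {
          public JsonResult GetEvents(double start, double end)
          {
              DataTable meetings = LoadMeetings(start, end);   // fills the "Meetings" table as in the question

              var events = meetings.AsEnumerable().Select(row => new
              {
                  id     = row.Field<int>("ID"),
                  title  = row.Field<string>("Title"),
                  allDay = false,
                  start  = row.Field<int>("Start"),            // seconds since 1970-01-01, as computed in the SQL
                  end    = row.Field<int>("End")
              });

              return Json(events.ToArray(), JsonRequestBehavior.AllowGet);
          }

          private DataTable LoadMeetings(double start, double end)
          {
              // Placeholder for the SqlDataAdapter.Fill code from the question.
              throw new System.NotImplementedException();
          }
      }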

    Read the article

  • problem with radgrid in pageindexchange event

    - by user203127
    Hi, I Have radgrid in my page. When i turn off view state and in pageindexchanged event when click next page i am getting nothing. Simply a blank page. But when i turn on view state I am getting data in next pages. Is there any way to get the data. I cant turn on the view state due to performance issue. Please see the code below for the reference. .aspx <telerik:RadGrid ID="RadGrid1" OnSortCommand="RadGrid1_SortCommand" OnPageIndexChanged="RadGrid1_PageIndexChanged" AllowSorting="True" PageSize="20" ShowGroupPanel="True" AllowPaging="True" AllowMultiRowSelection="True" AllowFilteringByColumn="true" AutoGenerateColumns="false" EnableViewState="false" runat="server" GridLines="None" OnItemUpdated="RadGrid1_ItemUpdated" OnDataBound="RadGrid1_DataBound"> aspx.cs public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { LoadData(); } private void LoadData() { SqlConnection SqlConn = new SqlConnection("uid=tempuser;password=tempuser;data source=USWASHL10015\\SQLEXPRESS;database=CCOM;"); SqlCommand cmd = new SqlCommand(); cmd.Connection = SqlConn; cmd.CommandType = CommandType.StoredProcedure; cmd.CommandText = "usp_testing"; SqlDataAdapter da = new SqlDataAdapter(cmd); DataSet ds = new DataSet(); da.Fill(ds); RadGrid1.DataSource = ds; RadGrid1.DataBind(); //RadGrid1.ClientSettings.AllowDragToGroup = true; } protected void RadGrid1_PageIndexChanged(object source, Telerik.Web.UI.GridPageChangedEventArgs e) { //RadGrid1.Rebind(); LoadData(); }
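
    With view state disabled the grid cannot restore its rows between postbacks, so it has to be rebound, with the correct page applied, on every paging postback. The sketch below is only one way to arrange that: it assumes RadGrid exposes CurrentPageIndex and that GridPageChangedEventArgs exposes NewPageIndex, mirroring the standard GridView pattern, so check those member names against your Telerik version. The connection string is deliberately replaced with a placeholder.

      // Sketch only; verify the RadGrid members against the Telerik version in use.
      using System;
      using System.Data;
      using System.Data.SqlClient;
      using Telerik.Web.UI;

      public partial class _Default : System.Web.UI.Page
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              if (!IsPostBack)
              {
                  LoadData();   // initial bind only; paging postbacks rebind in the handler below
              }
          }

          private void LoadData()
          {
              using (var connection = new SqlConnection("...connection string..."))
              using (var command = new SqlCommand("usp_testing", connection))
              {
                  command.CommandType = CommandType.StoredProcedure;
                  var adapter = new SqlDataAdapter(command);
                  var table = new DataTable();
                  adapter.Fill(table);

                  RadGrid1.DataSource = table;
                  RadGrid1.DataBind();
              }
          }

          protected void RadGrid1_PageIndexChanged(object source, GridPageChangedEventArgs e)
          {
              RadGrid1.CurrentPageIndex = e.NewPageIndex;   // move to the requested page first
              LoadData();                                   // then rebind, since nothing survives without view state
          }
      }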

    Read the article

  • Very strange iSeries Provider behavior

    - by AJ
    We've been given a "stored procedure" from our RPG folks that returns six data tables. Attempting to call it from .NET (C#, 3.5) using the iSeries Provider for .NET (tried using both V5R4 and V6R1), we are seeing different results based on how we call the stored proc. Here's way that we'd prefer to do it: using (var dbConnection = new iDB2Connection("connectionString")) { dbConnection.Open(); using(var cmd = dbConnection.CreateCommand()) { cmd.CommandType = CommandType.StoredProcedure; cmd.CommandText = "StoredProcName"; cmd.Parameters.Add(new iDB2Parameter("InParm1", iDB2DbType.Varchar).Value = thing; var ds = new DataSet(); var da = new iDB2DataAdapter(cmd); da.Fill(ds); } } Doing it this way, we get FIVE tables back in the result set. However, if we do this: cmd.CommandType = CommandType.Text; cmd.CommandText = "CALL StoredProcName('" + thing + "')"; We get back the expected SIX tables. I realize that there aren't many of us sorry .NET-to-DB2 folks out here, but I'm hoping someone has seen this before. TIA.

    Read the article

  • Reading,Writing, Editing BLOBS through DataTables and DataRows

    - by Soham
    Consider this piece of code: DataSet ds = new DataSet(); SQLiteDataAdapter Da = new SQLiteDataAdapter(Command); Da.Fill(ds); DataTable dt = ds.Tables[0]; bool PositionExists; if (dt.Rows.Count > 0) { PositionExists = true; } else { PositionExists = false; } if (PositionExists) { //dt.Rows[0].Field<>("Date") } Here the "Date" field is a BLOB. My question is, a. Will reading through the DataAdapter create any problems later on, when I am working with BLOBS? More, specifically, will it read the BLOB properly? b. This was the read part.Now when I am writing the BLOB to the DB,its a queue actually. I.e I am trying to store a queue in MySQLite using a BLOB. Will Conn.ExecuteNonQuery() serve my purpose? c. When I am reading the BLOB back from the DB, can I edit it as the original datatype, it used to be in C# environment i.e { Queue - BLOB - ? } {C# -MySQL - C# } So in this context, Date field was a queue, I wrote it back as a BLOB, when reading it back, can I access it(and edit) as a queue? Thank You. Soham
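
    For what it's worth, reading a BLOB back through a DataAdapter works fine: the cell simply comes back as a byte[] (for example dt.Rows[0].Field<byte[]>("Date")), and a parameterised command executed with ExecuteNonQuery is the usual way to write one. The queue-to-BLOB round trip then only needs a serializer you control. Below is a rough C# sketch along those lines; the table and column names are invented for the example, and it assumes the System.Data.SQLite provider used elsewhere in the question.

      // Rough sketch: round-tripping a Queue<DateTime> through a SQLite BLOB column.
      // Table/column names are illustrative; error handling omitted for brevity.
      using System;
      using System.Collections.Generic;
      using System.Data.SQLite;
      using System.IO;

      static class BlobQueueDemo
      {
          static byte[] Serialize(Queue<DateTime> queue)
          {
              using (var stream = new MemoryStream())
              using (var writer = new BinaryWriter(stream))
              {
                  writer.Write(queue.Count);
                  foreach (DateTime item in queue)
                      writer.Write(item.ToBinary());        // 8 bytes per date
                  return stream.ToArray();
              }
          }

          static Queue<DateTime> Deserialize(byte[] blob)
          {
              var queue = new Queue<DateTime>();
              using (var reader = new BinaryReader(new MemoryStream(blob)))
              {
                  int count = reader.ReadInt32();
                  for (int i = 0; i < count; i++)
                      queue.Enqueue(DateTime.FromBinary(reader.ReadInt64()));
              }
              return queue;
          }

          static void Main()
          {
              var dates = new Queue<DateTime>();
              dates.Enqueue(DateTime.UtcNow);
              dates.Enqueue(DateTime.UtcNow.AddDays(1));

              using (var conn = new SQLiteConnection("Data Source=quotes.db"))
              {
                  conn.Open();

                  using (var create = new SQLiteCommand(
                      "CREATE TABLE IF NOT EXISTS Positions (Symbol TEXT, DateQueue BLOB)", conn))
                      create.ExecuteNonQuery();

                  // Write the queue as a BLOB via a parameter - ExecuteNonQuery is fine here.
                  using (var insert = new SQLiteCommand(
                      "INSERT INTO Positions (Symbol, DateQueue) VALUES (@symbol, @dates)", conn))
                  {
                      insert.Parameters.AddWithValue("@symbol", "MSFT");
                      insert.Parameters.AddWithValue("@dates", Serialize(dates));
                      insert.ExecuteNonQuery();
                  }

                  // Read it back and rebuild the queue; from here on it's a normal Queue<DateTime>.
                  using (var select = new SQLiteCommand(
                      "SELECT DateQueue FROM Positions WHERE Symbol = @symbol", conn))
                  {
                      select.Parameters.AddWithValue("@symbol", "MSFT");
                      var restored = Deserialize((byte[])select.ExecuteScalar());
                      restored.Enqueue(DateTime.UtcNow.AddDays(2));   // edit it like any other queue
                      Console.WriteLine(restored.Count);              // 3
                  }
              }
          }
      }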

  • SqlDataAdapter: why am I getting a different query's results?

    - by Murat
    Hi All, I have a database reader class. Sometimes, though, the SqlDataAdapter.Fill() method fills the DataTable with the results of a different query. When I look at the query on the SqlCommand instance it is correct, yet the returned values come from a different table. What is the problem? This is my code:

    public static DataTable GetLatestRecords<T>(string TableName, string FilterFieldName, int RecordCount, params DBFieldAndValue[] FilterFields)
    {
        DataTable DT = new DataTable();
        Type ClassType = typeof(T);
        string SelectString = string.Format("SELECT TOP ({0}) * FROM {1}", RecordCount, TableName);
        if (FilterFields.Length > 0)
        {
            SelectString += " WHERE 1 = 1 ";
            for (int index = 0; index < FilterFields.Length; index++)
                SelectString += FilterFields[index].ToString(index);
        }
        SelectString += string.Format(" ORDER BY {0} DESC", FilterFieldName);
        try
        {
            SqlConnection Connection = GetConnection<T>();
            SqlCommand Command = new SqlCommand(SelectString, Connection);
            for (int index = 0; index < FilterFields.Length; index++)
                Command.Parameters.AddWithValue("@" + FilterFields[index].Field + "_" + index, FilterFields[index].Value);
            if (Connection.State == ConnectionState.Closed)
                Connection.Open();
            Command.CommandText = SelectString;
            SqlDataAdapter DA = new SqlDataAdapter(Command);
            DT.Dispose();
            DT = new DataTable();
            DA.Fill(DT);
            if (Connection.State == ConnectionState.Open)
                Connection.Close();
        }
        catch (Exception ex)
        {
            WriteLogStatic(ex.Message);
            throw new Exception(@"Veritabani islemleri sirasinda hata olustu!\nAyrintilar 'C:\Temp' dizini altindadir.\n" + ex.Message);
        }
        return DT;
    }
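    For reference, a call to this method would look roughly like the sketch below. DBFieldAndValue's shape (Field and Value properties) is inferred from the code above, and the type argument and table/column names are hypothetical, so treat the whole initializer as a guess rather than the real API.

    // Hypothetical usage: SELECT TOP (10) * FROM Orders WHERE 1 = 1 AND <Status filter> ORDER BY CreatedOn DESC
    DataTable latest = GetLatestRecords<Order>(
        "Orders",            // TableName
        "CreatedOn",         // FilterFieldName (used in ORDER BY ... DESC)
        10,                  // RecordCount (SELECT TOP (10))
        new DBFieldAndValue { Field = "Status", Value = "Open" });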

  • AutoCompleteExtender control in AJAX (ASP.NET) not working?

    - by Surya sasidhar
    Hi, I am using the AutoCompleteExtender in my application but it is not working. This is my code:

    <form id="form1" runat="server">
        <asp:scriptmanager ID="Scriptmanager1" runat="server"></asp:scriptmanager>
        <br />
        <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
        <div>
            <cc1:AutoCompleteExtender TargetControlID="TextBox1" MinimumPrefixLength="1"
                ServiceMethod="GetVideoTitles" CompletionSetCount="10" ServicePath="Myservices.asmx"
                ID="AutoCompleteExtender1" runat="server">
            </cc1:AutoCompleteExtender>
        </div>
    </form>

    This is the web service method:

    public string[] GetVideoTitles(string prefixText)
    {
        SqlConnection con = new SqlConnection(@"Data Source=SERVER5\SQLserver2005;Initial Catalog=tpvnew;User ID=xx;Password=525");
        con.Open();
        SqlCommand cmd = new SqlCommand("video_videotitles", con);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@prefixText", SqlDbType.VarChar, 50);
        cmd.Parameters["@prefixText"].Value = prefixText;
        SqlDataAdapter da = new SqlDataAdapter(cmd);
        DataTable dt = new DataTable();
        da.Fill(dt);
        string[] items = new string[dt.Rows.Count];
        int i = 0;
        foreach (DataRow dr in dt.Rows)
        {
            items.SetValue(dr["videotitle"].ToString(), i);
            i++;
        }
        return items;
    }
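    For the AJAX Control Toolkit's AutoCompleteExtender, the service method is normally expected to be a [WebMethod] with the exact (string prefixText, int count) signature, on a service class decorated with [ScriptService]. The sketch below only shows that shape; the class name matches the ServicePath above, and whether the missing attributes or the missing count parameter is the actual cause here is an assumption.

    // Sketch of the service shape the extender expects (assumed fix, not confirmed):
    [System.Web.Script.Services.ScriptService]
    public class Myservices : System.Web.Services.WebService
    {
        [System.Web.Services.WebMethod]
        public string[] GetVideoTitles(string prefixText, int count)
        {
            // ...same stored-procedure lookup as in the question...
            return new string[0];
        }
    }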

  • Create a unique ID by fuzzy matching of names (via agrep using R)

    - by tbrambor
    Using R, I am trying to match on people's names in a dataset structured by year and city. Due to some spelling mistakes, exact matching is not possible, so I am trying to use agrep() to fuzzy match names. A sample chunk of the dataset is structured as follows:

    df <- data.frame(matrix(
      c("1200013","1200013","1200013","1200013","1200013","1200013","1200013","1200013",
        "1996","1996","1996","1996","2000","2000","2004","2004",
        "AGUSTINHO FORTUNATO FILHO","ANTONIO PEREIRA NETO","FERNANDO JOSE DA COSTA",
        "PAULO CEZAR FERREIRA DE ARAUJO","PAULO CESAR FERREIRA DE ARAUJO",
        "SEBASTIAO BOCALOM RODRIGUES","JOAO DE ALMEIDA","PAULO CESAR FERREIRA DE ARAUJO"),
      ncol=3, dimnames=list(seq(1:8), c("citycode","year","candidate"))
    ))

    The neat version:

      citycode year candidate
    1 1200013  1996 AGUSTINHO FORTUNATO FILHO
    2 1200013  1996 ANTONIO PEREIRA NETO
    3 1200013  1996 FERNANDO JOSE DA COSTA
    4 1200013  1996 PAULO CEZAR FERREIRA DE ARAUJO
    5 1200013  2000 PAULO CESAR FERREIRA DE ARAUJO
    6 1200013  2000 SEBASTIAO BOCALOM RODRIGUES
    7 1200013  2004 JOAO DE ALMEIDA
    8 1200013  2004 PAULO CESAR FERREIRA DE ARAUJO

    I'd like to check, in each city separately, whether there are candidates appearing in several years. E.g. in the example, PAULO CEZAR FERREIRA DE ARAUJO / PAULO CESAR FERREIRA DE ARAUJO appears twice (with a spelling mistake). Each candidate across the entire data set should be assigned a unique numeric candidate ID. The dataset is fairly large (5500 cities, approx. 100K entries), so somewhat efficient code would be helpful. Any suggestions as to how to implement this?

  • Sorting GridView Formed With Data Set

    - by nani
    The following code is for sorting a GridView formed from a DataSet. Source: http://www.highoncoding.com/Articles/176_Sorting_GridView_Manually_.aspx But it is not displaying any output. There is no problem with the SQL connection. I am unable to trace the error; please help me. Thank You.

    public partial class _Default : System.Web.UI.Page
    {
        private const string ASCENDING = " ASC";
        private const string DESCENDING = " DESC";

        private DataSet GetData()
        {
            SqlConnection cnn = new SqlConnection("Server=localhost;Database=Northwind;Trusted_Connection=True;");
            SqlDataAdapter da = new SqlDataAdapter("SELECT TOP 5 firstname,lastname,hiredate FROM EMPLOYEES", cnn);
            DataSet ds = new DataSet();
            da.Fill(ds);
            return ds;
        }

        public SortDirection GridViewSortDirection
        {
            get
            {
                if (ViewState["sortDirection"] == null)
                    ViewState["sortDirection"] = SortDirection.Ascending;
                return (SortDirection)ViewState["sortDirection"];
            }
            set { ViewState["sortDirection"] = value; }
        }

        protected void GridView1_Sorting(object sender, GridViewSortEventArgs e)
        {
            string sortExpression = e.SortExpression;
            if (GridViewSortDirection == SortDirection.Ascending)
            {
                GridViewSortDirection = SortDirection.Descending;
                SortGridView(sortExpression, DESCENDING);
            }
            else
            {
                GridViewSortDirection = SortDirection.Ascending;
                SortGridView(sortExpression, ASCENDING);
            }
        }

        private void SortGridView(string sortExpression, string direction)
        {
            // You can cache the DataTable for improving performance
            DataTable dt = GetData().Tables[0];
            DataView dv = new DataView(dt);
            dv.Sort = sortExpression + direction;
            GridView1.DataSource = dv;
            GridView1.DataBind();
        }
    }

    aspx page

    <asp:GridView ID="GridView1" runat="server" AllowSorting="True" OnSorting="GridView1_Sorting"></asp:GridView>
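    One gap worth noting in the code as posted (an observation, not a confirmed diagnosis): the grid is only ever bound inside SortGridView, so nothing would render on the initial request before any sort postback. A minimal first-load binding would look like this:

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // Bind once on the first request; sorting postbacks then rebind via SortGridView.
            GridView1.DataSource = GetData();
            GridView1.DataBind();
        }
    }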

  • MATLAB Easter Egg Spy vs Spy

    - by Aqui1aZ3r0
    I heard that MATLAB 2009b or earlier had a fun function. When you typed spy in the console, you would get something like this: http://t2.gstatic.com/images?q=tbn:ANd9GcSHAgKz-y1HyPHcfKvBpYmZ02PWpe3ONMDat8psEr89K0VsP_ft However, now you get an image like this: http://undocumentedmatlab.com/images/spy2.png I'd like to get the code of the original Spy vs Spy image. FYI, there is some code, but it has certain errors in it:

    c = [';@3EA4:aei7]ced.CFHE;4\T*Y,dL0,HOQQMJLJE9PX[[Q.ZF.\JTCA1dd'
         '-IorRPNMPIE-Y\R8[I8]SUDW2e+'
         '=4BGC;7imiag2IFOQLID8''XI.]K0"PD@l32UZhP//P988_WC,U+Z^Y\<2'
         '<82BF>?8jnjbhLJGPRMJE9/YJ/L1#QMC$;;V[iv09QE99,XD.YB,[_]=3a'
         '9;CG?@9kokc2MKHQSOKF:0ZL0aM2$RNG%AAW\jw9E.FEE-_G8aG.d]_W5+'
         '?:CDH@A:lpld3NLIRTPLG=1[M1bN3%SOH4BBX]kx:J9LLL8H9bJ/+d_dX6,'
         '@;DEIAB;mqmePOMJSUQMJ2\N2cO4&TPP@HCY^lyDKEMMN9+I@+S8,+deY7^'
         '8@EFJBC<4rnfQPNPTVRNKB3]O3dP5''UQQCIDZ_mzEPFNNOE,RA,T9/,++\8_'
         '9A2G3CD=544gRQPQUWUOLE4^P4"Q6(VRRIJE[n{KQKOOPK-SE.W:F/,,]Z+'
         ':BDH4DE>655hSRQRVXVPMF5_Q5#R>)eSSJKF\ao0L.L-WUL.VF8XCH001_[,'
         ';3EI<eo ?766iTSRSWYWQNG6$R6''S?*fTTlLQ]bp1M/P.XVP8[H9]DIDA=]'
         '?4D3=FP@877jUTSTXZXROK7%S7(TF+gUUmMR^cq:N9Q8YZQ9_IcIJEBd_^'
         '@5E@GQA98b3VUTUY*YSPL8&T)UI,hVhnNS_dr;PE.9Z[RCaR?+JTFC?e+'
         '79FA?HRB:9c4WVUVZ+ZWQM=,WG*VJ-"gi4OTes-XH+bK.#hj@PUvftDRMEF,]UH,UB.TYVWX,e\'
         '9;ECAKTY< ;eWYXWX\:)YSOE.YI,cL/$ikCqV1guE/PFL-^XI-YG/WZWXY1+]'
         ':AFDBLUZ=jgY[ZYZ-<7[XQG0[K.eN1&"$K2u:iyO9.PN9-_K8aJ9_]]82['
         '?CEFDNW\?khZ[Z[==8\YRH1\M/!O2''#%m31Bw0PE/QXE8+R9bS;da^]93\'
         '@2FGEOX]ali[][9(ZSL2]N0"P3($&n;2Cx1QN9--L9,SA+T< +d_:4,'
         'A3GHFPY^bmj\^]\]??:)[TM3^O1%Q4)%''oA:D0:0OE.8ME-TE,XB,+da;5['
         '643IGQZ_cnk]_^]^@@;5\UN4_P2&R6*&(3B;E1<1PN99NL8WF.^C/,a+bY6,'
         '7:F3HR[dol^_^AA<6]VO5Q3''S>+'');CBF:=:QOEEOO9_G8aH6/d,cZ[Y'
         '8;G4IS\aep4_a-BD=7''XP6aR4(T?,(5@DCHCC;RPFLPPDH9bJ70+0d\\Z'
         '9BH>JT^bf45ba.CE@8(YQ7#S5)UD-)?AEDIDDD/QKMVQJ+S?cSDF,1e]a,'
         ':C3?K4_cg5[acbaADFA92ZR8$T6*VE.*@JFEJEEE0.NNWTK,U@+TEG0?+_bX'
         ';2D@L9dh6\bdcbBEGD:3[S=)U7+cK/+CKGFLIKI9/OWZUL-VA,WIHB@,`cY'];
    i = double(c(:)-32);
    j = cumsum(diff([0; i]) <= 0) + 1;
    S = sparse(i,j,1)';
    spy(S)

  • Mapping Cassandra Super Columns

    - by Laubstein
    Hello dudes, I guess everybody that has played with Cassandra has already read this article. I am trying to create my schema in cassandra-cli, but I am having a lot of problems; can someone point me in the right direction? I am trying to create a structure similar to the Comments column family from the article. In the cassandra-cli terminal I type:

    create column family posts
        with column_type = 'Super'
        and comparator = 'AsciiType'
        and subcomparator = 'TimeUUIDType';

    It works fine. There is no doc telling me whether a column_metadata attribute applies to the sub-columns when the column family is of type Super; I can't find out if that is true. So:

    create column family posts
        with column_type = 'Super'
        and comparator = 'AsciiType'
        and subcomparator = 'TimeUUIDType'
        and column_metadata = [{column_name:'body'}];

    I am trying to create the same as the Comments column family of the article, but when I try to populate it:

    set posts['post1'][timeuuid()][body] = 'Hello I am Goku!';

    I get:

    Invalid UUID string: body

    I guess this is because I chose the subcomparator to be of type TimeUUIDType and 'body' is a string, while it should be a timeuuid. So HOW can my columns inside the super column (which is of type timeuuid) hold columns with string-type names, as the comments in the article do? Thanks

  • C# and SQL - sub select from a table parameter

    - by Dr.HappyPants
    Below is a code snippet for passing a table as a parameter to a query, which can be used in SQL Server 2008. I'm confused, though, about the "SELECT id.custid FROM @custids id". Why does it use id.custid and @custids id...?

    private static void datatable_example()
    {
        string[] custids = { "ALFKI", "BONAP", "CACTU", "FRANK" };
        DataTable custid_list = new DataTable();
        custid_list.Columns.Add("custid", typeof(String));
        foreach (string custid in custids)
        {
            DataRow dr = custid_list.NewRow();
            dr["custid"] = custid;
            custid_list.Rows.Add(dr);
        }
        using (SqlConnection cn = setup_connection())
        {
            using (SqlCommand cmd = cn.CreateCommand())
            {
                cmd.CommandText = @"SELECT C.CustomerID, C.CompanyName
                                    FROM Northwind.dbo.Customers C
                                    WHERE C.CustomerID IN (SELECT id.custid FROM @custids id)";
                cmd.CommandType = CommandType.Text;
                cmd.Parameters.Add("@custids", SqlDbType.Structured);
                cmd.Parameters["@custids"].Direction = ParameterDirection.Input;
                cmd.Parameters["@custids"].TypeName = "custid_list_tbltype";
                cmd.Parameters["@custids"].Value = custid_list;
                using (SqlDataAdapter da = new SqlDataAdapter(cmd))
                using (DataSet ds = new DataSet())
                {
                    da.Fill(ds);
                    PrintDataSet(ds);
                }
            }
        }
    }
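    In that subquery, id is simply a table alias for the table-valued parameter, the same construct as the C alias on Northwind.dbo.Customers, so id.custid reads the custid column of each row that was passed in through @custids. For the parameter to work, a matching table type has to exist on the server. A sketch of creating it, run once against the database; the column definition is a guess based on the five-character CustomerID values above, and setup_connection() is the poster's helper, assumed to return an open connection:

    using (SqlConnection cn = setup_connection())
    using (SqlCommand create = cn.CreateCommand())
    {
        // Matches TypeName = "custid_list_tbltype" used on the structured parameter above.
        create.CommandText = "CREATE TYPE custid_list_tbltype AS TABLE (custid nchar(5) NOT NULL)";
        create.ExecuteNonQuery();
    }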

  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like these aren't available by default on most phones:

    - full text search: searches all address book contents, messages, anything else being a plus
    - better call management: e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something etc.)
    - bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth phone)
    - no multitasking capabilities worth mentioning, and in general no e.g. weekly software updates, making the phone much more usable (even if it had to be done over USB, rather than over the network)

    I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year-old phone has a CPU an order of magnitude faster than my first PC and about as much storage space, and it's ridiculous how bad (slow, unwieldy) the software is, and it's not one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled up?

  • Noob question about a statement in a Java program

    - by happysoul
    I am a beginner to Java and was trying out this code puzzle from the book Head First Java, which I solved as follows and got the correct output :D

    class DrumKit
    {
        boolean topHat = true;
        boolean snare = true;

        void playSnare()
        {
            System.out.println("bang bang ba-bang");
        }

        void playTopHat()
        {
            System.out.println("ding ding da-ding");
        }
    }

    public class DrumKitTestDriver
    {
        public static void main(String[] args)
        {
            DrumKit d = new DrumKit();
            if (d.snare == true)
            {
                d.playSnare();
            }
            d.playTopHat();
        }
    }

    Output is:

    bang bang ba-bang
    ding ding da-ding

    Now, the problem is that the code puzzle includes one snippet that I did not use:

    d.snare = false;

    Even though I did not write it, I got the same output as the book. I am wondering why there is any need to set its value to false when we know the code will run without it. I am wondering what the author had in mind; I mean, what could be the possible future use and motive behind doing this? I am sorry if it's a dumb question. I just want to know why or why not to include that particular statement. It's not like there's a loop or something that we need to come out of. Why is that statement there?

  • sorting filtered data in asp.net listview

    - by user791345
    I've created a ListView that's filled with a list of guitars from the database on page load, like so:

    protected void Page_Load(object sender, EventArgs e)
    {
        SqlConnection con = new SqlConnection(WebConfigurationManager.ConnectionStrings["GuitarsLTDBConnectionString"].ToString());
        SqlCommand cmd = new SqlCommand("SELECT * FROM Guitars", con);
        SqlDataAdapter da = new SqlDataAdapter(cmd.CommandText, con);
        DataTable dt = new DataTable();
        da.Fill(dt);
        lvGuitars.DataSource = dt;
        lvGuitars.DataBind();
    }

    The following code filters that list of guitars by a certain make when the user checks the checkbox corresponding to that make:

    protected void chkOrgs_SelectedIndexChanged(object sender, EventArgs e)
    {
        DataTable dt = (DataTable)lvGuitars.DataSource;
        DataView dv = new DataView(dt);
        if (chkOrgs.SelectedValue == "Gibson")
        {
            dv.RowFilter = "Make = 'Gibson' OR Make='Fender'";
        }
        lvGuitars.DataSource = dv.ToTable();
        lvGuitars.DataBind();
    }

    Now, what I want to do is be able to sort the latest data present in the ListView. Meaning, if sort is clicked before filtering, it should sort all the data; if sort is clicked after filtering, it should sort the filtered data. I'm using the following code, which is triggered by a LinkButton click:

    protected void lnkSortResults_Click(object sender, EventArgs e)
    {
        DataTable dt = (DataTable)lvGuitars.DataSource;
        DataView dv = new DataView(dt);
        dv.Sort = "Make ASC";
        lvGuitars.DataSource = dv.ToTable();
        lvGuitars.DataBind();
    }

    The problem is that all the data the ListView was loaded with before any filtering gets sorted, not the latest filtered data. How can I change this code so that the latest data available in the ListView is the one that's sorted? Thanks
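    One likely reason for this behaviour (stated here as an assumption, not a confirmed diagnosis): a control's DataSource is not preserved across postbacks, and Page_Load re-binds the full table on every request, so by the time the sort click runs, the filtered data is gone. A common workaround is to keep the current filter and sort in ViewState and rebuild the view in one place. A rough sketch, where LoadGuitarsFromDb() stands for the query currently in Page_Load:

    private string CurrentFilter
    {
        get { return (string)(ViewState["filter"] ?? ""); }
        set { ViewState["filter"] = value; }
    }

    private string CurrentSort
    {
        get { return (string)(ViewState["sort"] ?? ""); }
        set { ViewState["sort"] = value; }
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            BindGuitars();
        }
    }

    private void BindGuitars()
    {
        DataTable dt = LoadGuitarsFromDb();   // hypothetical helper: the SELECT * FROM Guitars query factored out
        DataView dv = new DataView(dt);
        dv.RowFilter = CurrentFilter;         // empty string means "no filter"
        dv.Sort = CurrentSort;                // empty string means "no sort"
        lvGuitars.DataSource = dv;
        lvGuitars.DataBind();
    }

    protected void chkOrgs_SelectedIndexChanged(object sender, EventArgs e)
    {
        CurrentFilter = (chkOrgs.SelectedValue == "Gibson") ? "Make = 'Gibson' OR Make = 'Fender'" : "";
        BindGuitars();
    }

    protected void lnkSortResults_Click(object sender, EventArgs e)
    {
        CurrentSort = "Make ASC";
        BindGuitars();
    }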

  • My java.util.Scanner won't work

    - by Kevin Steen Hansen
    Hello Stack Overflow, my code is getting this error:

    Skriv din alder herunder og tryk enter:
    Exception in thread "main" java.util.NoSuchElementException
        at java.util.Scanner.throwFor(Scanner.java:907)
        at java.util.Scanner.next(Scanner.java:1530)
        at java.util.Scanner.nextInt(Scanner.java:2160)
        at java.util.Scanner.nextInt(Scanner.java:2119)
        at Tasteturindtastning.main(Tasteturindtastning.java:20)
    [Finished in 1.7s with exit code 1]

    And my code is:

    // Start Java as usual, see HejVerden.java
    public class Tasteturindtastning
    {
        public static void main(String[] arg)
        {
            /* I now have to declare a variable, but I cannot decide its value myself,
             * since I want the input from the keyboard to be the variable.
             * Instead of int and double I use java.util.Scanner, which reads
             * the user's input. */
            java.util.Scanner tastetur = new java.util.Scanner(System.in);

            // Prints a task/question to the user
            System.out.println("Skriv din alder herunder og tryk enter:");

            int alder;                     // Declares a variable name
            alder = tastetur.nextInt();    // Assigns the variable the value from the input

            /* Below I use an if statement that checks the value of the variable alder
             * and sees whether it is equal to or greater than 18; if that is the case,
             * it prints a sentence. */
            if (alder >= 18)
                System.out.println("Du er myndig, da du er " + alder + " år gammel"); // Printed if he is 18 or older
        }
    }

    Can anyone tell me what is wrong?

  • Problem with WHERE columnName = Data in MySQL query in C#

    - by Ryan Sullivan
    I have a C# web service on a Windows server that I am interfacing with from PHP on a Linux server. The PHP grabs information from the database, and the page then offers a "more information" button which calls the web service, passing in the name field of the record as a parameter. I am using a WHERE clause in my query so that I only pull the extra fields for that record. I am getting the error:

    System.Data.SqlClient.SqlException: Invalid column name '42'

    where 42 is the value from the name field. My query is:

    string selectStr = "SELECT name, castNotes, triviaNotes FROM tableName WHERE name =\"" + show + "\"";

    I do not know if it is a problem with my query or if something is wrong with the database, but here is the rest of my code for reference. NOTE: this all works perfectly when I grab all of the records, but I only want to grab the record that I ask my web service for.

    public class ktvService : System.Web.Services.WebService
    {
        [WebMethod]
        public string moreInfo(string show)
        {
            string connectionStr = "MyConnectionString";
            string selectStr = "SELECT name, castNotes, triviaNotes FROM tableName WHERE name =\"" + show + "\"";
            SqlConnection conn = new SqlConnection(connectionStr);
            SqlDataAdapter da = new SqlDataAdapter(selectStr, conn);
            DataSet ds = new DataSet();
            da.Fill(ds, "tableName");
            DataTable dt = ds.Tables["tableName"];
            DataRow theShow = dt.Rows[0];
            string response = "Name: " + theShow["name"].ToString() + "Cast: " + theShow["castNotes"].ToString() + " Trivia: " + theShow["triviaNotes"].ToString();
            return response;
        }
    }
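    For what it's worth, in T-SQL double quotes normally delimit identifiers (with QUOTED_IDENTIFIER on), which is why the value 42 ends up being treated as a column name here; that reading of the error is an inference from the message, not something confirmed in the question. A parameterized version sidesteps both that quoting issue and the SQL injection risk of concatenating the value. A sketch, keeping the poster's table and column names:

    string selectStr = "SELECT name, castNotes, triviaNotes FROM tableName WHERE name = @show";
    using (SqlConnection conn = new SqlConnection(connectionStr))
    using (SqlDataAdapter da = new SqlDataAdapter(selectStr, conn))
    {
        // The provider handles quoting and escaping of the value.
        da.SelectCommand.Parameters.AddWithValue("@show", show);
        DataSet ds = new DataSet();
        da.Fill(ds, "tableName");
        // ...read ds.Tables["tableName"] as in the original method...
    }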

  • SSL over TDS, SQL Server 2005 Express

    - by reuvenab
    I am capturing packets sent/received by a Windows XP machine when connecting to SQL Server 2005 Express using TLS encryption.

    1. Server and client exchange Hello messages.
    2. Server and client send ChangeCipherSpec messages.
    3. Then server and client send a strange message that is not described in the TLS protocol.

    What is that message, and is SSL over TDS standards compliant at all?

    Server side capture:

    16 **SSL Handshake**
    03 01 00 4a 02 ServerHello
    00 00 46 03 01 4b dd 68 59 GMT
    33 13 37 98 10 5d 57 9d ff 71 70 dc d6 6f 9e 2c Random[00..13]
    cb 96 c0 2e b3 2f 9b 74 67 05 cc 96 Random[14..27]
    20 72 26 00 00 0f db 7f d9 b0 51 c2 4f cd 81 4c Session ID
    3f e3 d2 d1 da 55 c0 fe 9b 56 b7 6f 70 86 fe bb Session ID
    54 Session ID
    00 04 Cipher Suite
    00 Compression
    14 03 01 00 01 01 **ChangeCipherSpec**
    16 03 01 ???? Finished ???
    00 20 d0 da cc c4 36 11 43 ff 22 25 8a e1 38 2b ???? ???
    71 ce f3 59 9e 35 b0 be b2 4b 1d c5 21 21 ce 41 ???? ???
    8e 24

  • Export GridView to Excel (not working)

    - by Chiramisu
    I've spent the last two days trying to get some bloody data to export to Excel. After much research I determined that the best and most common way is to use HttpResponse headers, as shown in my code below. After stepping through countless times in debug mode, I have confirmed that the data is in fact there, and both filtered and sorted the way I want it. However, it does not download as an Excel file, or do anything at all for that matter. I suspect this may have something to do with my UpdatePanel, or perhaps the ImageButton not posting back properly, but I'm not sure. What am I doing wrong? Please help me debug this issue. I will be eternally grateful. Thank you. :)

    Markup

    <asp:UpdatePanel ID="statusUpdatePanel" runat="server" UpdateMode="Conditional">
        <Triggers>
            <asp:AsyncPostBackTrigger ControlID="btnExportXLS" EventName="Click" />
        </Triggers>
        <ContentTemplate>
            <asp:GridView ID="GridView1" runat="server" AllowPaging="True" PageSize="10"
                AllowSorting="True" DataSourceID="GridView1SDS" DataKeyNames="ID">
            </asp:GridView>
            <span><asp:ImageButton ID="btnExportXLS" runat="server" /></span>
        </ContentTemplate>
    </asp:UpdatePanel>

    Codebehind

    Protected Sub ExportToExcel() Handles btnExportXLS.Click
        Dim dt As New DataTable()
        Dim da As New SqlDataAdapter(SelectCommand, ConnectionString)
        da.Fill(dt)

        Dim gv As New GridView()
        gv.DataSource = dt
        gv.DataBind()

        Dim frm As HtmlForm = New HtmlForm()
        frm.Controls.Add(gv)

        Dim sw As New IO.StringWriter()
        Dim hw As New System.Web.UI.HtmlTextWriter(sw)

        Response.ContentType = "application/vnd.ms-excel"
        Response.AddHeader("content-disposition", "attachment;filename=Report.xls")
        Response.Charset = String.Empty
        gv.RenderControl(hw)
        Response.Write(sw.ToString())
        Response.End()
    End Sub
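    One frequently cited explanation for exactly this symptom (offered as a likely cause, not a verified one): a file download cannot be delivered through an asynchronous UpdatePanel postback, so the export button has to trigger a full postback. A sketch of the trigger swap on the markup above:

    <Triggers>
        <asp:PostBackTrigger ControlID="btnExportXLS" />
    </Triggers>

    An equivalent alternative is calling ScriptManager.RegisterPostBackControl(btnExportXLS) in Page_Load so the button is excluded from partial rendering.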

  • Laravel - Mail class Exception

    - by Christian Giupponi
    I need to send email from within my app and this is my code:

    if ($agent->save())
    {
        // Prepare the email to send with the login data
        $data = [
            'nome'     => $input['nome'],
            'cognome'  => $input['cognome'],
            'email'    => $input['email'],
            'password' => $input['password']
        ];

        // WARNING
        // This must be removed in production; it pretends to send the mail
        Mail::pretend();

        // Fetch the template and pass the data to the function
        Mail::send('emails.agents.registration', $data, function($message) use ($data)
        {
            $message->to($data['email'], $data['nome'].' '.$data['cognome'])->subject('Benvenuto!');
        });

        return Redirect::action('admin.agents.index')->with('positive_flash_message', 'Agente inserito correttamente.');
    }

    As you can see, I use Mail::pretend() to avoid actually sending email in development. The problem is that I get this error every time I try to send an email:

    Undefined property: Illuminate\Mail\Message::$email (View: /var/www/progetti/app/views/emails/agents/registration.blade.php)

    And this is my Blade view:

    Email: {{ $message->email }}
    Password: {{ $message->password }}

    What's wrong with $message?

< Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >