Search Results

Search found 29194 results on 1168 pages for 'non null'.


  • Issues with ACTION_HEADSET_PLUG broadcast in Android

    - by Denis M
    I've tried these phones: Moto Backflip (1.5) and Nexus One (2.1). Basically, I register a BroadcastReceiver for the ACTION_HEADSET_PLUG broadcast and look at the three extras that come in the intent: state, name, and microphone. Here is the description from the API:

    * state - 0 for unplugged, 1 for plugged
    * name - headset type, human-readable string
    * microphone - 1 if the headset has a microphone, 0 otherwise

    Issue #1: The broadcast arrives when the activity starts (not expected), when a screen rotation happens (not expected), and when the headset/headphones are plugged or unplugged (expected).

    Issue #2: The Backflip sends null for state and microphone, with 'No Device' as the name, when the headset/headphones are unplugged, and null for state and microphone, with 'Stereo HeadSet'/'Stereo HeadPhones' as the name, when they are plugged in. The Nexus is even worse: it always sends null for state and microphone, with 'Headset' as the name, whether plugged or unplugged.

    Question: How can the API be broken this badly on both the 1.5 and 2.1 versions and across different devices and manufacturers?
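
    A minimal sketch of the receiver described above, reading the extras defensively since some handsets clearly omit them. One documented detail also helps explain Issue #1: ACTION_HEADSET_PLUG is a sticky broadcast, so registering a receiver immediately delivers the most recent (possibly stale) intent:

        BroadcastReceiver headsetReceiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                // getIntExtra returns the supplied default when the extra is
                // absent, so -1 below means "not reported by this device".
                int state = intent.getIntExtra("state", -1);
                String name = intent.getStringExtra("name");
                int microphone = intent.getIntExtra("microphone", -1);
                Log.d("Headset", "state=" + state + " name=" + name + " mic=" + microphone);
            }
        };
        // Sticky broadcast: registration itself triggers onReceive with the
        // last-sent intent, which accounts for the "fires on startup" issue.
        context.registerReceiver(headsetReceiver, new IntentFilter(Intent.ACTION_HEADSET_PLUG));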

    Read the article

  • Convert an SQLite SQL dump file to PostgreSQL

    - by DevX
    I've been doing development using an SQLite database, with PostgreSQL in production. I just updated my local database with a huge amount of data and need to transfer a specific table to the production database. SQLite outputs a table dump in the following format:

        BEGIN TRANSACTION;
        CREATE TABLE "courses_school" ("id" integer PRIMARY KEY, "department_count" integer NOT NULL DEFAULT 0, "the_id" integer UNIQUE, "school_name" varchar(150), "slug" varchar(50));
        INSERT INTO "courses_school" VALUES(1,168,213,'TEST Name A',NULL);
        INSERT INTO "courses_school" VALUES(2,0,656,'TEST Name B',NULL);
        ....
        COMMIT;

    How do I convert the above into a PostgreSQL-compatible dump file that I can import into my production server?
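
    For a dump as simple as the one above, surprisingly little needs to change; here is a low-tech filter sketch (an assumption-laden one: it only strips PRAGMA lines, which PostgreSQL rejects, and passes everything else through, since the quoted identifiers and INSERT syntax shown are already valid PostgreSQL):

        #!/usr/bin/env python3
        # Usage: sqlite3 dev.db ".dump courses_school" | python3 sqlite2pg.py > table.sql
        import sys

        for line in sys.stdin:
            line = line.rstrip("\n")
            # SQLite emits PRAGMA statements that PostgreSQL does not understand
            if line.startswith("PRAGMA"):
                continue
            print(line)

    After the import, note that explicitly inserted key values do not advance a PostgreSQL sequence, so if "id" is later backed by one, reset it with setval.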

    Read the article

  • Assert: a good practice or not?

    - by rkenshin
    Is it good practice to use Assert on function parameters to enforce their validity? I was going through the source code of the Spring Framework and noticed that they use Assert.notNull a lot. Here's an example:

        public static ParsedSql parseSqlStatement(String sql) {
            Assert.notNull(sql, "SQL must not be null");
        }

    Here's another one:

        public NamedParameterJdbcTemplate(DataSource dataSource) {
            Assert.notNull(dataSource, "The [dataSource] argument cannot be null.");
            this.classicJdbcTemplate = new JdbcTemplate(dataSource);
        }

        public NamedParameterJdbcTemplate(JdbcOperations classicJdbcTemplate) {
            Assert.notNull(classicJdbcTemplate, "JdbcTemplate must not be null");
            this.classicJdbcTemplate = classicJdbcTemplate;
        }

    Thank you
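
    For comparison, a sketch of the same guard using only the JDK (Objects.requireNonNull has shipped since Java 7 and throws NullPointerException, where Spring's Assert.notNull throws IllegalArgumentException):

        import java.util.Objects;

        public class ParsedSqlFactory {
            public static String parseSqlStatement(String sql) {
                Objects.requireNonNull(sql, "SQL must not be null"); // fails fast with the given message
                return sql.trim(); // placeholder body for illustration
            }
        }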

    Read the article

  • ReadDirectoryChangesW not working with big files

    - by sanky
    I was working with ReadDirectoryChangesW until I encountered some unwanted behaviour. Below is a short reference to the way I am using my ReadDirectoryChangesW function. The behaviour is that when a big file (in GBs) is copied into the watched directory, I get the notification immediately and start processing according to the notification I receive, but the file I/O (the copy operation) is still pending, and that results in an unexpected state. Can anyone please suggest a way to synchronize this? I want the copy operation (the one that raised the notification) to complete first; only then do I want to do my processing.

        while(ReadDirectoryChangesW(hDir, pBuffer, nBufSize, FALSE,
                                    FILE_NOTIFY_CHANGE_FILE_NAME,
                                    &BytesReturned, NULL, NULL))
        {
            pBufferCurrent = pBuffer;
            while(pBufferCurrent)
            {
                switch(pBufferCurrent->Action)
                {
                case FILE_ACTION_ADDED:
                    break;
                default:
                    break;
                }
                if (pBufferCurrent->NextEntryOffset)
                    pBufferCurrent = (FILE_NOTIFY_INFORMATION*)(((oef_u8_t*)pBufferCurrent) + pBufferCurrent->NextEntryOffset);
                else
                    pBufferCurrent = NULL;
            }
        }
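
    One common workaround sketch (an approach, not the only one): after the FILE_ACTION_ADDED notification, poll until the writer has closed the file. Opening with no sharing fails with ERROR_SHARING_VIOLATION while the copy is still in progress, so a successful open means the copy has finished:

        #include <windows.h>

        static BOOL WaitForFileReady(const wchar_t *path, DWORD timeoutMs)
        {
            DWORD waited = 0;
            while (waited < timeoutMs) {
                /* dwShareMode = 0: exclusive open, fails while the copy holds the file */
                HANDLE h = CreateFileW(path, GENERIC_READ, 0, NULL,
                                       OPEN_EXISTING, 0, NULL);
                if (h != INVALID_HANDLE_VALUE) {
                    CloseHandle(h);
                    return TRUE;   /* writer is done; safe to process */
                }
                if (GetLastError() != ERROR_SHARING_VIOLATION)
                    return FALSE;  /* some other error: missing file, access denied */
                Sleep(250);
                waited += 250;
            }
            return FALSE;          /* timed out */
        }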

    Read the article

  • UpdatePanel, Repeater, DataBinding Problem

    - by Gordon Carpenter-Thompson
    In a user control, I've got a Repeater inside an UpdatePanel (which is displayed inside a ModalPopupExtender). The Repeater is databound using an ArrayList of MyDTO objects. There are two buttons for each item in the list, and upon binding, each button's ImageURL and CommandArgument are set. This code works fine the first time around, but the CommandArgument is wrong thereafter. It seems the display is updated correctly but the DTO isn't, and the CommandArgument sent is the one that has just been removed. Can anybody spot any problems with the code?

    ASCX:

        <asp:UpdatePanel ID="ViewDataDetail" runat="server" ChildrenAsTriggers="true">
          <Triggers>
            <asp:PostBackTrigger ControlID="ViewDataCloseButton" />
            <asp:AsyncPostBackTrigger ControlID="DataRepeater" />
          </Triggers>
          <ContentTemplate>
            <table width="100%" id="DataResults">
              <asp:Repeater ID="DataRepeater" runat="server"
                            OnItemCommand="DataRepeater_ItemCommand"
                            OnItemDataBound="DataRepeater_ItemDataBound">
                <HeaderTemplate>
                  <tr>
                    <th><b>Name</b></th>
                    <th><b>&nbsp;</b></th>
                  </tr>
                </HeaderTemplate>
                <ItemTemplate>
                  <tr>
                    <td><b><%#((MyDTO)Container.DataItem).Name%></b></td>
                    <td>
                      <asp:ImageButton CausesValidation="false" ID="DeleteData" CommandName="Delete" runat="server" />
                      <asp:ImageButton CausesValidation="false" ID="RunData" CommandName="Run" runat="server" />
                    </td>
                  </tr>
                  <tr>
                    <td colspan="2">
                      <table>
                        <tr>
                          <td>Description : </td>
                          <td><%#((MyDTO)Container.DataItem).Description%></td>
                        </tr>
                        <tr>
                          <td>Search Text : </td>
                          <td><%#((MyDTO)Container.DataItem).Text%></td>
                        </tr>
                      </table>
                    </td>
                  </tr>
                </ItemTemplate>
              </asp:Repeater>
            </table>
          </ContentTemplate>
        </asp:UpdatePanel>

    Code-behind:

        public DeleteData DeleteDataDelegate;
        public RetrieveData PopulateDataDelegate;
        public delegate ArrayList RetrieveData();
        public delegate void DeleteData(String sData);

        protected void Page_Load(object sender, EventArgs e)
        {
            // load the initial data..
            if (!Page.IsPostBack)
            {
                if (PopulateDataDelegate != null)
                {
                    this.DataRepeater.DataSource = this.PopulateDataDelegate();
                    this.DataRepeater.DataBind();
                }
            }
        }

        protected void DataRepeater_ItemCommand(object source, RepeaterCommandEventArgs e)
        {
            if (e.CommandName == "Delete")
            {
                if (DeleteDataDelegate != null)
                {
                    DeleteDataDelegate((String)e.CommandArgument);
                    BindDataToRepeater();
                }
            }
            else if (e.CommandName == "Run")
            {
                String sRunning = (String)e.CommandArgument;
                this.ViewDataModalPopupExtender.Hide();
            }
        }

        protected void DataRepeater_ItemDataBound(object source, RepeaterItemEventArgs e)
        {
            RepeaterItem item = e.Item;
            if (item != null && item.DataItem != null)
            {
                MyDTO oQuery = (MyDTO)item.DataItem;
                ImageButton oDeleteControl = (ImageButton)item.FindControl("DeleteData");
                ImageButton oRunControl = (ImageButton)item.FindControl("RunData");
                if (oDeleteControl != null && oRunControl != null)
                {
                    oRunControl.ImageUrl = "button_expand.gif";
                    oRunControl.CommandArgument = "MyID";
                    oDeleteControl.ImageUrl = "btn_remove.gif";
                    oDeleteControl.CommandArgument = "MyID";
                }
            }
        }

        public void BindDataToRepeater()
        {
            this.DataRepeater.DataSource = this.PopulateDataDelegate();
            this.DataRepeater.DataBind();
        }

        public void ShowModal(object sender, EventArgs e)
        {
            BindDataToRepeater();
            this.ViewDataModalPopupExtender.Show();
        }
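
    One thing worth checking (an observation on the code as posted, not a confirmed fix): both buttons are given the literal string "MyID" as their CommandArgument, so every row posts back the same value. Binding the DTO's actual key keeps the argument in sync with the row even after rebinding; a sketch, assuming MyDTO exposes some ID property:

        if (oDeleteControl != null && oRunControl != null)
        {
            oRunControl.ImageUrl = "button_expand.gif";
            oRunControl.CommandArgument = oQuery.ID;    // hypothetical MyDTO.ID property
            oDeleteControl.ImageUrl = "btn_remove.gif";
            oDeleteControl.CommandArgument = oQuery.ID; // a distinct argument per row
        }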

    Read the article

  • How do you handle the fetchxml result data?

    - by Luke Baulch
    I have avoided working with FetchXML, as I have been unsure of the best way to handle the result data after calling crmService.Fetch(fetchXml). In a couple of situations I have used an XDocument with LINQ to retrieve the data from this structure, such as:

        XDocument resultset = XDocument.Parse(_service.Fetch(fetchXml));
        if (resultset.Root == null || !resultset.Root.Elements("result").Any())
        {
            return;
        }
        foreach (var displayItem in resultset.Root.Elements("result")
                     .Select(item => item.Element(displayAttributeName)).Distinct())
        {
            if (displayItem != null && displayItem.Value != null)
            {
                dropDownList.Items.Add(displayItem.Value);
            }
        }

    What is the best way to handle FetchXML result data so that it can be used easily? Being able to pass these records into something like an ASP.NET data grid would be quite useful.
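
    One option, a sketch along the same XDocument lines (not a CRM-specific API; `dataGrid` and the System.Data/System.Xml.Linq using directives are assumed), is to flatten the result elements into a DataTable, which binds directly to an ASP.NET grid:

        var table = new DataTable();
        // First pass: create a column for every attribute name that appears.
        foreach (XElement result in resultset.Root.Elements("result"))
            foreach (XElement field in result.Elements())
                if (!table.Columns.Contains(field.Name.LocalName))
                    table.Columns.Add(field.Name.LocalName);
        // Second pass: one DataRow per <result> element.
        foreach (XElement result in resultset.Root.Elements("result"))
        {
            DataRow row = table.NewRow();
            foreach (XElement field in result.Elements())
                row[field.Name.LocalName] = field.Value;
            table.Rows.Add(row);
        }
        dataGrid.DataSource = table;  // assumed GridView/DataGrid control
        dataGrid.DataBind();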

    Read the article

  • NodeJS and node-mongodb-native

    - by w1nk
    Just getting started with Node, and trying to get the Mongo driver to work. I've got my connection set up, and oddly I can insert things just fine; however, calling find on a collection produces craziness:

        var db = new mongo.Db('things', new mongo.Server('192.168.2.6', mongo.Connection.DEFAULT_PORT, {}), {});
        db.open(function(err, db) {
            db.collection('things', function(err, collection) {
                // collection.insert(row);
                collection.find({}, null, function(err, cursor) {
                    cursor.each(function(err, doc) {
                        sys.puts(sys.inspect(doc, true));
                    });
                });
            });
        });

    If I uncomment the insert and comment out the find, it works a treat. The inverse unfortunately doesn't hold; I receive this error:

        collection.find({}, null, function(err, cursor) {
                   ^
        TypeError: Cannot call method 'find' of null

    I'm sure I'm doing something silly, but for the life of me I can't find it...
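
    A sketch of the same flow with the error arguments actually checked: if open() or collection() fails, the callback's second argument arrives as null, which matches the "Cannot call method 'find' of null" message, so inspecting err should reveal the underlying cause:

        db.open(function(err, db) {
            if (err) throw err;
            db.collection('things', function(err, collection) {
                if (err) throw err; // likely where the real failure is reported
                collection.find({}, null, function(err, cursor) {
                    if (err) throw err;
                    cursor.each(function(err, doc) {
                        if (doc) sys.puts(sys.inspect(doc, true));
                    });
                });
            });
        });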

    Read the article

  • Sending SMS programmatically in 1.5 on CDMA device

    - by Justin
    I am developing an application that relies heavily on sending SMS programmatically. I followed the examples released after 1.6 that demonstrate how to use an abstract class to support sending on both 1.5 and 1.6+. I started getting complaints from some users that SMS appear to be sent but in fact never arrive. It took me a while to realize what was going on: the one 1.5 test device I have is GSM. Of course, it must be because the Sprint Hero (running 1.5) is CDMA. Regardless of message size, I use the general form:

        divideMessage()
        sendMultipartTextMessage(destinationAddress, null, parts, null, null)

    How can I successfully send an SMS in this case? Can I call sendTextMessage() a number of times? Also, I tried unsuccessfully to find the source for the Hero's Messenger/Conversations package (or equivalent); if anyone knows where to find that, that would be great.
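
    One hedged workaround sketch (shown with the 1.6+ android.telephony.SmsManager; the 1.5 gsm package exposes the same calls): send each divided part as its own single-part message. This loses the concatenation metadata of a true multipart SMS, but avoids the multipart send path that appears broken on this handset:

        SmsManager sms = SmsManager.getDefault();
        ArrayList<String> parts = sms.divideMessage(longMessage);
        for (String part : parts) {
            // Same destination, default SMSC, no sent/delivery PendingIntents
            sms.sendTextMessage(destinationAddress, null, part, null, null);
        }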

    Read the article

  • "Object reference not set to an instance of an object": why can't .NET show more details?

    - by Simon Chadwick
    "Object reference not set to an instance of an object" This is probably one of the most common run-time errors in .NET. Although the System.Exception has a stack trace, why does the exception not also show the name of the object reference field, or at least its type? Over the course of a year I spend hours sifting through stack traces (often in code I did not write), hoping there is a line number from a ".pdb" file, then finding the line in the code, and even then it is often not obvious which reference on the line was null. Having the name of the reference field would be very convenient. If System.ArgumentNullException instances can show the name of the method parameter ("Value cannot be null. Parameter name: value"), then surely System.NullReferenceException instances could include the name of the null field (or its containing collection).

    Read the article

  • Issue with CGEventTapCreate() call.

    - by Dheeraj
    Hi all, I'm trying to register for global key events using this code:

        void function() {
            CFMachPortRef keyUpEventTap = CGEventTapCreate(kCGHIDEventTap,
                                                           kCGHeadInsertEventTap,
                                                           kCGEventTapOptionListenOnly,
                                                           CGEventMaskBit(kCGEventKeyUp),
                                                           &keyUpCallback, NULL);
            CFRunLoopSourceRef keyUpRunLoopSourceRef = CFMachPortCreateRunLoopSource(NULL, keyUpEventTap, 0);
            CFRelease(keyUpEventTap);
            CFRunLoopAddSource(CFRunLoopGetCurrent(), keyUpRunLoopSourceRef, kCFRunLoopDefaultMode);
            CFRelease(keyUpRunLoopSourceRef);
        }

    The application crashes while executing the CFMachPortCreateRunLoopSource() call. I think the crash is caused by CGEventMaskBit(kCGEventKeyUp) when I create the event tap. If I instead create the tap with CGEventMaskBit(kCGEventFlagsChanged), the application works fine and does not crash, and I get callbacks whenever any modifier key is pressed. But I need callbacks when the delete key is pressed. Any ideas? Thanks, Dheeraj.
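
    One thing worth ruling out (a sketch of a defensive check, based on documented CGEventTapCreate behaviour): tapping keyboard events requires the process to be trusted for assistive access, and CGEventTapCreate returns NULL when it fails. Passing that NULL into CFMachPortCreateRunLoopSource would crash exactly as described, while flags-changed events are less restricted, which would explain why that variant works:

        CFMachPortRef keyUpEventTap = CGEventTapCreate(kCGHIDEventTap,
                                                       kCGHeadInsertEventTap,
                                                       kCGEventTapOptionListenOnly,
                                                       CGEventMaskBit(kCGEventKeyUp),
                                                       &keyUpCallback, NULL);
        if (keyUpEventTap == NULL) {
            // Tap creation failed; on key-event taps this commonly means
            // "Enable access for assistive devices" is off.
            fprintf(stderr, "CGEventTapCreate failed\n");
            return;
        }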

    Read the article

  • Sharepoint listsService.updateList method clarification

    - by cyrix86
    I've seen an example here: msdn, but it's a little confusing. So if I have a list definition with a field called "CustomField", and I want to update the "ShowField" attribute of this field to be true, then I would do this:

        XmlNode listNode = listService.GetList("MyList");
        string version = listNode.Attributes["Version"].Value;
        string guid = listNode.Attributes["Name"].Value;

        XmlDocument xmlDoc = new XmlDocument();
        XmlElement updateFields = xmlDoc.CreateElement("Fields");
        string fieldXml = @"<Method ID=""1""><Field Name=""CustomField"" ShowField=""true"" /></Method>";
        updateFields.InnerXml = fieldXml;
        XmlNode result = listService.UpdateList(guid, null, null, updateFields, null, version);

    I'm confused because it would seem that you need to provide a Field element to indicate which field to update, and then a value element to specify the new value. Could someone clarify this please?

    Read the article

  • Behaviour of insertion trigger when defining autoincrement in Oracle

    - by Genba
    I have been looking for a way to define an autoincrement data type in Oracle and have found these questions on Stack Overflow:

    - Autoincrement in Oracle
    - Autoincrement Primary key in Oracle database

    The way to use autoincrement types consists of defining a sequence and a trigger to make the insertion transparent, where the insertion trigger looks like this:

        create trigger mytable_trg
        before insert on mytable
        for each row
        when (new.id is null)
        begin
          select myseq.nextval into :new.id from dual;
        end;

    I have some doubts about the behaviour of this trigger:

    - What does this trigger do when the supplied value of "id" is different from NULL?
    - What does the colon before "new" mean?

    I want the trigger to insert the new row with the next value of the sequence as the ID, whatever the supplied value of "new.id" is. I imagine that the WHEN clause makes the trigger only fire when the supplied ID is NULL (and it will not insert, or will fail, otherwise). Could I just remove the WHEN clause so that the trigger always assigns the next value of the sequence?
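
    For reference, a sketch of the variant the question asks about: with the WHEN clause removed, the trigger body runs for every insert and overwrites any supplied id with the next sequence value.

        create or replace trigger mytable_trg
        before insert on mytable
        for each row
        begin
          -- runs unconditionally: a caller-supplied id is replaced
          select myseq.nextval into :new.id from dual;
        end;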

    Read the article

  • SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered a world record TPC-H @3000GB benchmark result for systems with four processors. This result beats eight-processor results from IBM (POWER7) and HP (x86). The SPARC T4-4 server also delivered better performance per core than these eight-processor systems from IBM and HP. Comparisons below are based upon system-to-system comparisons, highlighting Oracle's complete software and hardware solution. This database world record result used Oracle's Sun Storage 2540-M2 arrays (rotating disk) connected to a SPARC T4-4 server running Oracle Solaris 11 and Oracle Database 11g Release 2, demonstrating the power of Oracle's integrated hardware and software solution.

    - The SPARC T4-4 server based configuration achieved a TPC-H scale factor 3000 world record for four-processor systems of 205,792 QphH@3000GB with price/performance of $4.10/QphH@3000GB.
    - The SPARC T4-4 server with four SPARC T4 processors (total of 32 cores) is 7% faster than the IBM Power 780 server with eight POWER7 processors (total of 32 cores) on the TPC-H @3000GB benchmark.
    - The SPARC T4-4 server is 36% better in price/performance compared to the IBM Power 780 server on the TPC-H @3000GB benchmark.
    - The SPARC T4-4 server is 29% faster than the IBM Power 780 for data loading.
    - The SPARC T4-4 server is up to 3.4 times faster than the IBM Power 780 server for the Refresh Function.
    - The SPARC T4-4 server with four SPARC T4 processors is 27% faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark.
    - The SPARC T4-4 server is 52% faster than the HP ProLiant DL980 G7 server for data loading.
    - The SPARC T4-4 server is up to 3.2 times faster than the HP ProLiant DL980 G7 for the Refresh Function.
    - The SPARC T4-4 server achieved a peak IO rate from the Oracle database of 17 GB/sec. This rate was independent of the storage used, as demonstrated by the TPC-H @3000GB benchmark which used twelve Sun Storage 2540-M2 arrays (rotating disk) and the TPC-H @1000GB benchmark which used four Sun Storage F5100 Flash Array devices (flash storage). [*]
    - The SPARC T4-4 server showed linear scaling from TPC-H @1000GB to TPC-H @3000GB. This demonstrates that the SPARC T4-4 server can handle the increasingly larger databases required of DSS systems. [*]
    - The SPARC T4-4 server benchmark results demonstrate a complete solution of building Decision Support Systems including data loading, business questions and refreshing data. Each phase usually has a time constraint and the SPARC T4-4 server shows superior performance during each phase. [*]

    [*] The TPC believes that comparisons of results published with different scale factors are misleading and discourages such comparisons.

    Performance Landscape

    The table lists the leading TPC-H @3000GB results for non-clustered systems.
    TPC-H @3000GB, Non-Clustered Systems

        System                  Processor                  P/C/T - Memory        Composite  $/perf    Power      Throughput  Database         Available
                                                                                 (QphH)     ($/QphH)  (QppH)     (QthH)
        SPARC Enterprise M9000  3.0 GHz SPARC64 VII+       64/256/256 - 1024 GB  386,478.3  $18.19    316,835.8  471,428.6   Oracle 11g R2    09/22/11
        SPARC T4-4              3.0 GHz SPARC T4           4/32/256 - 1024 GB    205,792.0  $4.10     190,325.1  222,515.9   Oracle 11g R2    05/31/12
        SPARC Enterprise M9000  2.88 GHz SPARC64 VII       32/128/256 - 512 GB   198,907.5  $15.27    182,350.7  216,967.7   Oracle 11g R2    12/09/10
        IBM Power 780           4.1 GHz POWER7             8/32/128 - 1024 GB    192,001.1  $6.37     210,368.4  175,237.4   Sybase 15.4      11/30/11
        HP ProLiant DL980 G7    2.27 GHz Intel Xeon X7560  8/64/128 - 512 GB     162,601.7  $2.68     185,297.7  142,685.6   SQL Server 2008  10/13/10

        P/C/T = Processors, Cores, Threads
        QphH = the Composite Metric (bigger is better)
        $/QphH = the Price/Performance metric in USD (smaller is better)
        QppH = the Power Numerical Quantity
        QthH = the Throughput Numerical Quantity

    The following table lists data load times and refresh function times during the power run.

    TPC-H @3000GB, Non-Clustered Systems: Database Load & Database Refresh

        System                  Processor                  Data Loading (h:m:s)  T4 Advan  RF1 (sec)  T4 Advan  RF2 (sec)  T4 Advan
        SPARC T4-4              3.0 GHz SPARC T4           04:08:29              1.0x      67.1       1.0x      39.5       1.0x
        IBM Power 780           4.1 GHz POWER7             05:51:50              1.5x      147.3      2.2x      133.2      3.4x
        HP ProLiant DL980 G7    2.27 GHz Intel Xeon X7560  08:35:17              2.1x      173.0      2.6x      126.3      3.2x

        Data Loading = database load time
        RF1 = power test first refresh transaction
        RF2 = power test second refresh transaction
        T4 Advan = the ratio of time to T4 time

    Complete benchmark results can be found at the TPC benchmark website http://www.tpc.org.

    Configuration Summary and Results

    Hardware Configuration:
    - SPARC T4-4 server: 4 x SPARC T4 3.0 GHz processors (total of 32 cores, 128 threads), 1024 GB memory, 8 x internal SAS (8 x 300 GB) disk drives
    - External storage: 12 x Sun Storage 2540-M2 arrays, each with 12 x 15K RPM 300 GB drives, 2 controllers, 2 GB cache

    Software Configuration:
    - Oracle Solaris 11 11/11
    - Oracle Database 11g Release 2 Enterprise Edition

    Audited Results:
    - Database Size: 3000 GB (Scale Factor 3000)
    - TPC-H Composite: 205,792.0 QphH@3000GB
    - Price/performance: $4.10/QphH@3000GB
    - Available: 05/31/2012
    - Total 3 year Cost: $843,656
    - TPC-H Power: 190,325.1
    - TPC-H Throughput: 222,515.9
    - Database Load Time: 4:08:29

    Benchmark Description

    The TPC-H benchmark is a performance benchmark established by the Transaction Processing Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC.

    TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system. The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor). QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years maintenance to the QphH. A secondary metric is the storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.

    Key Points and Best Practices

    - Twelve Sun Storage 2540-M2 arrays were used for the benchmark. Each Sun Storage 2540-M2 array contains 12 15K RPM drives and is connected to a single dual-port 8Gb FC HBA using 2 ports. Each Sun Storage 2540-M2 array showed 1.5 GB/sec for sequential read operations and showed linear scaling, achieving 18 GB/sec with twelve Sun Storage 2540-M2 arrays. These were stand-alone IO tests.
    - The peak IO rate measured from the Oracle database was 17 GB/sec.
    - Oracle Solaris 11 11/11 required very little system tuning.
    - Some vendors try to make the point that storage ratios are of customer concern. However, storage ratio size has more to do with disk layout and the increasing capacities of disks, so this is not an important metric in which to compare systems.
    - The SPARC T4-4 server and Oracle Solaris efficiently managed the system load of over one thousand Oracle Database parallel processes.
    - Six Sun Storage 2540-M2 arrays were mirrored to another six Sun Storage 2540-M2 arrays on which all of the Oracle database files were placed. IO performance was high and balanced across all the arrays.
    - The TPC-H Refresh Function (RF) simulates the periodic refresh portion of a Data Warehouse by adding new sales data and deleting old sales data. Parallel DML (parallel insert and delete in this case) and database log performance are key for this function, and the SPARC T4-4 server outperformed both the IBM POWER7 server and the HP ProLiant DL980 G7 server. (See the RF columns above.)

    See Also

    - Transaction Processing Performance Council (TPC) Home Page
    - Ideas International Benchmark Page
    - SPARC T4-4 Server oracle.com OTN
    - Oracle Solaris oracle.com OTN
    - Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN
    - Sun Storage 2540-M2 Array oracle.com OTN

    Disclosure Statement

    TPC-H, QphH, $/QphH are trademarks of the Transaction Processing Performance Council (TPC). For more information, see www.tpc.org. SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads.

    Read the article

  • Roo: how to create a composite primary key in an Entity

    - by Paul
    Hi, what can I do if I need to create an entity for a table in a production DB (Oracle 10g) with a composite primary key? For example:

        CREATE TABLE TACCOUNT
        (
          BRANCHID    NUMBER(3) NOT NULL,
          ACC         VARCHAR2(18 BYTE) NOT NULL,
          DATE_OPEN   DATE NOT NULL,
          DATE_CLOSE  DATE,
          NOTE        VARCHAR2(38 BYTE)
        );
        CREATE UNIQUE INDEX PK_TACCOUNT ON TACCOUNT (BRANCHID, ACC);

    I don't want to change the structure of this table. Is it possible to create an "id" field using Roo commands? I use Spring Roo 1.0.2.RELEASE [rev 638]. Paul
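
    One workaround sketch in plain JPA, independent of what Roo generates (class and field names here are illustrative): map the (BRANCHID, ACC) pair through an @Embeddable id class.

        import java.io.Serializable;
        import javax.persistence.*;

        @Embeddable
        public class AccountId implements Serializable {
            @Column(name = "BRANCHID")
            private Integer branchId;
            @Column(name = "ACC")
            private String acc;
            // JPA composite keys must implement equals() and hashCode()
            // over both fields (omitted here for brevity).
        }

        @Entity
        @Table(name = "TACCOUNT")
        public class Account {
            @EmbeddedId
            private AccountId id;
            // remaining columns omitted
        }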

    Read the article

  • Updating records with their subordinates via CTE or subquery

    - by Mike Jolley
    Let's say I have a table with the following columns:

        Employees Table
        employeeID          int
        employeeName        varchar(50)
        managerID           int
        totalOrganization   int

    managerID is referential to employeeID. totalOrganization is currently 0 for all records. I'd like to update totalOrganization on each row to the total number of employees under them. So with the following records:

        employeeID  employeeName   managerID  totalOrganization
        1           John Cruz      NULL       0
        2           Mark Russell   1          0
        3           Alice Johnson  1          0
        4           Juan Valdez    3          0

    the query should update totalOrganization to:

        employeeID  employeeName   managerID  totalOrganization
        1           John Cruz      NULL       3
        2           Mark Russell   1          0
        3           Alice Johnson  1          1
        4           Juan Valdez    3          0

    I know I can get somewhat of an org chart using the following CTE:

        WITH OrgChart (employeeID, employeeName, managerID, level) AS
        (
            SELECT employeeID, employeeName, 0 AS managerID, 0 AS Level
            FROM Employees
            WHERE managerID IS NULL
            UNION ALL
            SELECT Employees.employeeID, Employees.employeeName, Employees.managerID, Level + 1
            FROM Employees
            INNER JOIN OrgChart ON Employees.managerID = OrgChart.employeeID
        )
        SELECT employeeID, employeeName, managerID, level
        FROM OrgChart;

    Is there any way to update the Employees table using a stored procedure, rather than building some routine outside of SQL to parse through the data?
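
    A sketch of one way to do it in a single T-SQL statement (assuming the hierarchy is a tree with no cycles): anchor every employee as their own root, recurse down to collect all descendants, then write the per-root counts back.

        ;WITH Subordinates AS
        (
            SELECT employeeID AS rootID, employeeID
            FROM Employees
            UNION ALL
            SELECT s.rootID, e.employeeID
            FROM Employees e
            INNER JOIN Subordinates s ON e.managerID = s.employeeID
        )
        UPDATE Employees
        SET totalOrganization =
        (
            SELECT COUNT(*) - 1          -- minus one: exclude the employee themselves
            FROM Subordinates
            WHERE rootID = Employees.employeeID
        );

    Wrapped in a stored procedure, this avoids any client-side parsing; with the sample rows above it yields 3, 0, 1, 0 as required.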

    Read the article

  • Turning an NSImage* into a CGImageRef?

    - by Brian Postow
    Is there an easy way to do this that works in 10.5? In 10.6 I can use:

        [nsImage CGImageForProposedRect:NULL context:NULL hints:NULL]

    If I'm not using 1-bit black-and-white images (like Group 4 TIFF), I can use bitmaps, but CGBitmaps seem to not like that setup... Is there a general way of doing this? I need to do this because I have an IKImageView that seems to only want to add CGImages, but all I've got are NSImages. Currently, I'm using a private setImage:(NSImage*) method that I'd REALLY REALLY rather not be using...
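
    A 10.5-compatible sketch (an outline under the assumption that rendering through a CGBitmapContext is acceptable; this is not a documented one-call API): draw the NSImage into a bitmap context and take a CGImage from it. Drawing converts 1-bit images to RGB along the way, which sidesteps the bitmap-format issue mentioned above.

        CGImageRef CreateCGImageFromNSImage(NSImage *image)
        {
            NSSize size = [image size];
            CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
            CGContextRef ctx = CGBitmapContextCreate(NULL, size.width, size.height,
                                                     8, 0, space,
                                                     kCGImageAlphaPremultipliedFirst);
            CGColorSpaceRelease(space);

            NSGraphicsContext *gc = [NSGraphicsContext graphicsContextWithGraphicsPort:ctx flipped:NO];
            [NSGraphicsContext saveGraphicsState];
            [NSGraphicsContext setCurrentContext:gc];
            [image drawInRect:NSMakeRect(0, 0, size.width, size.height)
                     fromRect:NSZeroRect
                    operation:NSCompositeCopy
                     fraction:1.0];
            [NSGraphicsContext restoreGraphicsState];

            CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
            CGContextRelease(ctx);
            return cgImage; // caller is responsible for CGImageRelease
        }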

    Read the article

  • MapViewOfFile shared between 32bit and 64bit processes

    - by MK
    Hi, I'm trying to use MapViewOfFile in a 64-bit process on a file that is already mapped into the memory of another 32-bit process. It fails with an "access denied" error. Is this a known Windows limitation, or am I doing something wrong? The same code works fine with two 32-bit processes. The code looks roughly like this:

        hMapFile = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, szShmName);
        if (NULL == hMapFile)
        {
            /* failed to open - create new (this happens in the 32 bit app) */
            SECURITY_ATTRIBUTES sa;
            sa.nLength = sizeof(SECURITY_ATTRIBUTES);
            sa.bInheritHandle = FALSE;
            /* give access to members of administrators group */
            BOOL success = ConvertStringSecurityDescriptorToSecurityDescriptor(
                "D:(A;OICI;GA;;;BA)", SDDL_REVISION_1, &(sa.lpSecurityDescriptor), NULL);
            HANDLE hShmFile = CreateFile(FILE_FAXCOM_SHM, FILE_ALL_ACCESS, 0, &sa,
                                         OPEN_ALWAYS, 0, NULL);
            hMapFile = CreateFileMapping(hShmFile, &sa, PAGE_READWRITE, 0, SHM_SIZE, szShmName);
            CloseHandle(hShmFile);
        }

        // this one fails in the 64 bit app
        pShm = MapViewOfFile(hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, SHM_SIZE);

    Read the article

  • LINQ Expression not evaluating correctly

    - by WedTM
    I have the following LINQ expression that's not returning the appropriate response:

        var query = from quote in db.Quotes
                    where quote.QuoteStatus == "Estimating" || quote.QuoteStatus == "Rejected"
                    from emp in db.Employees
                    where emp.EmployeeID == quote.EmployeeID
                    orderby quote.QuoteID descending
                    select new
                    {
                        quote.QuoteID,
                        quote.DateDue,
                        Company = quote.Company.CompanyName,
                        Attachments = quote.Attachments.Count,
                        Employee = emp.FullName,
                        Estimator = (quote.EstimatorID != null && quote.EstimatorID != String.Empty)
                            ? db.Employees.Single(c => c.EmployeeID == quote.EstimatorID).FullName
                            : "Unassigned",
                        Status = quote.QuoteStatus,
                        Priority = quote.Priority
                    };

    The problem lies in the Estimator part: it NEVER evaluates to "Unassigned"; on the rows that are supposed to, it just returns null. Have I written this wrong?

    Read the article

  • IPhone SDK Default NSUserDefaults...

    - by Harry
    I have set up the user defaults to record an integer for a UISlider. The problem is that if the user has only just installed the app, the integer is zero or NULL. Is there a way to detect whether an integer is NULL? Here's the code I have so far:

        -(void)awakeFromNib {
            NSUserDefaults *prefs = [NSUserDefaults standardUserDefaults];
            senset = [prefs integerForKey:@"senset"];
            senset = sensitivity.value;
        }

        - (IBAction) setSens {
            senset = sensitivity.value;
        }

        - (IBAction) goBackopt {
            NSUserDefaults *prefs = [NSUserDefaults standardUserDefaults];
            [prefs setInteger:senset forKey:@"senset"];
            [prefs synchronize];
        }

    senset is the integer. I have tried this code, and it sets the value to 25 the first time, but then if I try to overwrite it, it won't work and keeps setting it back to 25. :(

        if ( [prefs integerForKey:@"senset"] == NULL ) {
            senset = 25;
        }

    Please help :P Harry
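
    A common pattern (a sketch): integerForKey: returns 0 both when the key was never set and when a stored value happens to be 0, so comparing the int to NULL only ever tests for zero. Test objectForKey: for nil to detect a first run, and only then apply the fallback:

        NSUserDefaults *prefs = [NSUserDefaults standardUserDefaults];
        if ([prefs objectForKey:@"senset"] == nil) {
            senset = 25; // first launch: no stored value yet
        } else {
            senset = [prefs integerForKey:@"senset"];
        }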

    Read the article

  • How to extend the Smarty class right

    - by Evolutio
    I have a problem with Smarty. I have written this class:

        <?php
        require_once(INCLUDE_PATH.'smarty/Smarty.class.php');

        class IndexPage extends Smarty
        {
            public $templateName = 'index.tpl';

            public function __construct()
            {
                parent::__construct();
                $this->showTemplate();
            }

            public function showTemplate()
            {
                self::display('index.tpl');
            }
        }
        ?>

    But why doesn't it work? Here is the error:

        Fatal error: Uncaught exception 'SmartyException' with message 'Unable to load template file 'index.tpl''
        in D:\xampp\htdocs\aiondb\lib\smarty\sysplugins\smarty_internal_templatebase.php:127
        Stack trace:
        #0 D:\xampp\htdocs\aiondb\lib\smarty\sysplugins\smarty_internal_templatebase.php(374): Smarty_Internal_TemplateBase->fetch('index.tpl', NULL, NULL, NULL, true)
        #1 D:\xampp\htdocs\aiondb\lib\classes\IndexPage.class.php(14): Smarty_Internal_TemplateBase->display('index.tpl')
        #2 D:\xampp\htdocs\aiondb\lib\classes\IndexPage.class.php(10): IndexPage->showTemplate()
        #3 D:\xampp\htdocs\aiondb\index.php(3): IndexPage->__construct()
        #4 {main}
        thrown in D:\xampp\htdocs\aiondb\lib\smarty\sysplugins\smarty_internal_templatebase.php on line 127
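
    A likely direction (a sketch, assuming Smarty 3 defaults; the paths below are illustrative): "Unable to load template file" usually means the template directory was never configured on the subclass, so point the instance at your directories before calling display():

        <?php
        require_once(INCLUDE_PATH.'smarty/Smarty.class.php');

        class IndexPage extends Smarty
        {
            public $templateName = 'index.tpl';

            public function __construct()
            {
                parent::__construct();
                // Illustrative paths: adjust to wherever index.tpl actually lives
                $this->setTemplateDir(INCLUDE_PATH.'templates/');
                $this->setCompileDir(INCLUDE_PATH.'templates_c/');
                $this->display($this->templateName); // instance call, not self::
            }
        }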

    Read the article

  • How do I create a table dynamically, with a dynamic datatype, from a PL/SQL procedure

    - by Swapna
        CREATE OR REPLACE PROCEDURE p_create_dynamic_table
        IS
           v_qry_str     VARCHAR2 (100);
           v_data_type   VARCHAR2 (30);
        BEGIN
           SELECT data_type || '(' || data_length || ')'
             INTO v_data_type
             FROM all_tab_columns
            WHERE table_name = 'TEST1' AND column_name = 'ZIP';

           FOR sql_stmt IN (SELECT * FROM test1 WHERE zip IS NOT NULL)
           LOOP
              IF v_qry_str IS NOT NULL
              THEN
                 v_qry_str := v_qry_str || ',' || 'zip_' || sql_stmt.zip || ' ' || v_data_type;
              ELSE
                 v_qry_str := 'zip_' || sql_stmt.zip || ' ' || v_data_type;
              END IF;
           END LOOP;

           IF v_qry_str IS NOT NULL
           THEN
              v_qry_str := 'create table test2 ( ' || v_qry_str || ' )';
           END IF;

           EXECUTE IMMEDIATE v_qry_str;
           COMMIT;
        END p_create_dynamic_table;

    Is there any better way of doing this?

    Read the article

  • PHP5 what's wrong with my syntax?

    - by myxospsm
    Hey, I've never developed before and I'm a bit puzzled: what is wrong with my syntax here?

        private static $instance;   // holder of Mongo_Wrapper
        public $connected = true;
        private $mongo = null;      // the Mongo connection affiliation
        private $database = null;   // the database we are working on

    with this function:

        public function mongo_connect($db_name)
        {
            if (! self::connected) {
                $this->mongo = new Mongo;
                // TODO: error handle this whole sharade:
                throw new Kohana_Database_Exception('Cant connect', NULL, 503);
                $this->connected = true;
            }
            $this->database = $this->mongo->$db_name; // set the database we are working on
            return $connected;
        }

    I'm sorry, the wmd-editor is giving me hell posting the code. Thank you!
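
    For comparison, a corrected sketch (keeping the names from the question): `self::connected` is constant syntax and cannot reach an instance property, the unconditional throw makes the lines after it unreachable, and the bare `$connected` at the end is an undefined local variable. The flag also presumably needs to start as false, or the connect branch never runs:

        private static $instance;    // holder of Mongo_Wrapper
        public $connected = false;   // was true, so mongo_connect() never connected
        private $mongo = null;       // the Mongo connection
        private $database = null;    // the database we are working on

        public function mongo_connect($db_name)
        {
            if (!$this->connected) {
                $this->mongo = new Mongo();
                // TODO: wrap the connection attempt and throw only on failure:
                // throw new Kohana_Database_Exception('Cant connect', NULL, 503);
                $this->connected = true;
            }
            $this->database = $this->mongo->$db_name; // select the working database
            return $this->connected;
        }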

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn't just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here.

    In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows: the Sort operator and the Hash Match operator. The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it's created a Hash it's seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it's gathered all the data before it can do the sort, and therefore blocks the data flow.

    But that was my last post. This one's a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month's T-SQL Tuesday. I've frequently explained the fact that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin, because you have less control over it and you can't apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators, coming in blocking and non-blocking varieties.

    The example I like to use to explain this is a pile of playing cards. If I'm handed a pile of cards and asked to count how many cards there are in each suit, it's going to help if the cards are already ordered. Suppose I'm playing a game of Bridge; I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I've got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don't have any more Hearts in my hand.

    Alternatively, if the pile of cards is not sorted, I won't know how many Hearts I have until I've looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I've finished making all those piles, I can count how many there are in each. Because I don't know any of the final numbers until I've seen all the cards, this is blocking. This performs the aggregate function using a Hash Match.

    Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn't blocking. But this one is. They're essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values has been seen before, but where before it needed only the mere existence of a new set of values, now it needs to consider how many of them there are.

    A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you'll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this.
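
    As a small illustration of the two physical choices (assuming the AdventureWorks sample tables; the original post showed this with execution plan screenshots), query hints can push the same logical GROUP BY to either operator:

        -- Hash Match Aggregate: order-insensitive, but blocking; no group is
        -- released until every input row has been consumed.
        SELECT ProductID, COUNT(*) AS OrderLines
        FROM Sales.SalesOrderDetail
        GROUP BY ProductID
        OPTION (HASH GROUP);

        -- Stream Aggregate: releases each group as soon as it ends, but needs
        -- input ordered by ProductID (the optimizer adds a Sort if no index
        -- provides that order).
        SELECT ProductID, COUNT(*) AS OrderLines
        FROM Sales.SalesOrderDetail
        GROUP BY ProductID
        OPTION (ORDER GROUP);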
    In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I'm doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower.

    This is all very straightforward stuff, and information that most people are fully aware of. I'm sure you've all read my good friend Paul White (@sql_kiwi)'s post on how the Query Optimizer chooses which type of aggregate function to apply.

    But let's take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it's described as blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I've just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but the SSIS Aggregate component is an example of a blocking transformation. But is that always the case? After all, there are plenty of SSIS performance tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component.

    And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things.)

    A query that will produce a million rows in order was in order. Let me rephrase: I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely and just insert the value 1. This is a shortcut I wouldn't be expecting from SSIS, but certainly the Stream behaviour would be nice.

    Unfortunately, it's not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It's not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It's a physical operation. When you write T-SQL and ask for an aggregation to be done, it's a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you're telling the system that you want a generic Aggregation, which will have to work with whatever data is passed in.

    I'm not saying that it wouldn't be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I'll make one of those, and publish it on my blog. I've done it before with a Script Component, but as Script Components are single-use, I was able to handle the data knowing everything about my data flow already.
    As per my previous post, there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what's going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it's not always obvious from the surface. In a future post, I'll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot's Script Components and Custom Components as examples. When I get that sorted, I'll make a Stream Aggregate component available for download.

    Read the article

  • Flex AdvancedDataGrid with Grouping, how do I get objects to appear under first GroupingField if the

    - by shadenite
    I am using an AdvancedDataGrid with two GroupingFields. The dataProvider has a list of objects with these two field values, but occasionally the second field value can be null. When it loads, the AdvancedDataGrid UI has a root folder (first GroupingField) and some additional subfolders (second GroupingField). This is all good. However, the objects with a null value for the second GroupingField just get placed in a subfolder with no label. I want the objects with a null second GroupingField value to appear as leaf nodes directly beneath the root folder (first GroupingField), minus the blank subfolder. A good way to picture this would be a file explorer. Is there a good way to do this? Make the folder icon disappear, maybe, after expanding this node through ActionScript? Something like:

        ParentFolder
            SubFolder
                Leaf Object
                Leaf Object
            SubFolder
                Leaf Object
            Leaf Object
            Leaf Object

    Read the article

  • Using nullable types in C#

    - by Martin Brown
    I'm just interested in people's opinions. When using nullable types in C#, what is the best-practice way to test for null:

        bool isNull = (i == null);

    or:

        bool isNull = !i.HasValue;

    Also, when assigning to a non-nullable type, is this:

        long? i = 1;
        long j = (long)i;

    better than:

        long? i = 1;
        long j = i.Value;
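
    As a related aside (a sketch of behaviour that's easy to verify): the cast and .Value are equivalent, both throwing InvalidOperationException when no value is present, while GetValueOrDefault() and the null-coalescing operator substitute a fallback instead:

        long? i = null;
        long a = i.GetValueOrDefault();  // 0, no exception
        long b = i ?? -1;                // -1, caller-chosen fallback
        // long c = (long)i;             // throws InvalidOperationException
        // long d = i.Value;             // throws InvalidOperationException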

    Read the article
