Search Results

Search found 6728 results on 270 pages for 'srv record'.

Page 44/270 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • Linq to SQL EntitySet Records causing duplicate insertion

    - by Savvas Sopiadis
    In a WPF application I'm using Linq to SQL in a multi-tier application. (This is an archaeology photo filing application, so every excavation has its corresponding Pictures, thus a one-to-many relationship.) This relationship is correctly created by SQLMetal (which I'm using to create the POCOs). Here is the situation I'm having trouble with: saving changes (of either new or altered objects) is done through the UnitOfWork() pattern this way: using (IUnitOfWork unitOfWork = UnitOfWork.Begin()) { //if this is a new record if (SelectedExcavation.excavation.ExcavationId == 0) { IsNewRecord = true; excavService.Add(SelectedExcavation.excavation); } //send the actual changes to the dbms unitOfWork.Commit(); } Everything works fine, BUT whenever a record gets updated which already has at least one corresponding Picture record: 1) a new Excavation record is inserted; 2) the current Excavation record gets updated; 3) the previous Picture record gets its Id changed to the newly created ExcavationId. What is going on under the hood? Does Linq to SQL not handle such simple update scenarios? Thanks in advance

    Read the article

  • php foreach visible on page

    - by Hintswen
    In my PHP code I have this: $filename = 'data.xml'; $xml = file_get_contents($filename); $data = simplexml_load_string($xml); $variable = ""; foreach ($data->file_info as $record) { $id1 = $record['id1']; $id2 = $record['id2']; } And it works perfectly fine on the web server, but when trying to view it locally (using XAMPP) I get the following output at the top of my page: file_info as $record) { $id1 = $record['id1']; $id2 = $record['id2']; } (followed by another 100 or so lines of PHP). Not sure if it makes a difference, but the web server it works on is running Linux, and I am trying to view it on Windows using XAMPP.

    Read the article

  • problem parsing JSON Strings

    - by blacktooth
    var records = JSON.parse(JsonString); for(var x=0;x<records.result.length;x++) { var record = records.result[x]; ht_text+="<b><p>"+(x+1)+" " +record.EMPID+" " +record.LOCNAME+" " +record.DEPTNAME+" " +record.CUSTNAME +"<br/><br/><div class='slide'>" +record.REPORT +"</div></p></b><br/>"; } The above code works fine when the JsonString contains an array of entities, but fails for a single entity: result is not identified as an array. What's wrong with it? http://pastebin.com/hgyWw5hd
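
    What is described here is the common case where a serializer emits a bare object when there is exactly one entity and an array otherwise; whether that is what this particular JsonString does is an assumption. A minimal sketch (in TypeScript, with the field names taken from the snippet above) of normalizing the parsed value so the loop always sees an array:

        // Hypothetical shape of one entity, using the fields referenced in the question.
        interface ResultRecord {
          EMPID: string;
          LOCNAME: string;
          DEPTNAME: string;
          CUSTNAME: string;
          REPORT: string;
        }

        function parseRecords(jsonString: string): ResultRecord[] {
          const parsed = JSON.parse(jsonString);
          // If the producer emitted a single object instead of an array,
          // wrap it so the calling loop can iterate either way.
          return Array.isArray(parsed.result) ? parsed.result : [parsed.result];
        }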

    Read the article

  • Rtti for Variant Records

    - by Coco
    I'm trying to write a kind of object/record serializer with Delphi 2010 and wonder if there is a way to detect whether a record is a variant record. E.g. the TRect record as defined in Types.pas: TRect = record case Integer of 0: (Left, Top, Right, Bottom: Longint); 1: (TopLeft, BottomRight: TPoint); end; As my serializer should work recursively on my data structures, it will descend into the TPoint records and generate redundant information in my serialized file. Is there a way to avoid this by getting detailed information on the record?

    Read the article

  • How can I create the XML::Simple data structure using a Perl XML SAX parser?

    - by DVK
    Summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple. Details: We have a large code infrastructure which depends on processing records one-by-one and expects each record to be a data structure in the format produced by XML::Simple, since it has always used XML::Simple, since the early Jurassic era. An example simple XML is: <root> <rec><f1>v1</f1><f2>v2</f2></rec> <rec><f1>v1b</f1><f2>v2b</f2></rec> <rec><f1>v1c</f1><f2>v2c</f2></rec> </root> And example rough code is: sub process_record { my ($obj, $record_hash) = @_; # do_stuff } my $records = XML::Simple->XMLin(@args)->{root}; foreach my $record (@$records) { $obj->process_record($record) }; As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog, due to being a DOM parser and needing to build/store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records record-by-record. However, rewriting the entire code (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple. I'm looking for an existing module which will probably be based on a SAX parser (or anything fast with a small memory footprint) which can be used to produce $record hashrefs one by one, based on the XML pictured above, that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is; e.g. whether I need to call next_record() or give it a callback coderef accepting a record.

    Read the article

  • Could InAppSettingsKit do this, or is there another library?

    - by cannyboy
    I'm trying to implement a system whereby the user is initially presented with a single table cell, in a UITableView (grouped style), within a navigation controller. --------------- + | add record | --------------- When they click on the cell, they are pushed onto a new screen where they fill in a few text views (perhaps embedded in a table view's cells) --------------- | (name) | --------------- | (phone num) | --------------- Then when they go back, they can see the new record as well as the 'add record' cell. --------------- | record 1 | --------------- + | add record | --------------- (When they go into record 1 again there would be a delete button.) Is there any sample code or library which would achieve this? What about InAppSettingsKit? It's more the presentation I'm concerned with. I can handle the saving of data myself.

    Read the article

  • StreamWriter appends random data

    - by void
    Hi, I'm seeing odd behaviour where the StreamWriter class writes extra data to a file, using this code: public void WriteToCSV(string filename) { StreamWriter streamWriter = null; try { streamWriter = new StreamWriter(filename); Log.Info("Writing CSV report header information ... "); streamWriter.WriteLine("\"{0}\",\"{1}\",\"{2}\",\"{3}\"", ((int)CSVRecordType.Header).ToString("D2", CultureInfo.CurrentCulture), m_InputFilename, m_LoadStartDate, m_LoadEndDate); int recordCount = 0; if (SummarySection) { Log.Info("Writing CSV report summary section ... "); foreach (KeyValuePair<KeyValuePair<LoadStatus, string>, CategoryResult> categoryResult in m_DataLoadResult.DataLoadResults) { streamWriter.WriteLine("\"{0}\",\"{1}\",\"{2}\",\"{3}\"", ((int)CSVRecordType.Summary).ToString("D2", CultureInfo.CurrentCulture), categoryResult.Value.StatusString, categoryResult.Value.Count.ToString(CultureInfo.CurrentCulture), categoryResult.Value.Category); recordCount++; } } Log.Info("Writing CSV report cases section ... "); foreach (KeyValuePair<KeyValuePair<LoadStatus, string>, CategoryResult> categoryResult in m_DataLoadResult.DataLoadResults) { foreach (CaseLoadResult result in categoryResult.Value.CaseLoadResults) { if ((LoadStatus.Success == result.Status && SuccessCases) || (LoadStatus.Warnings == result.Status && WarningCases) || (LoadStatus.Failure == result.Status && FailureCases) || (LoadStatus.NotProcessed == result.Status && NotProcessedCases)) { streamWriter.Write("\"{0}\",\"{1}\",\"{2}\",\"{3}\",\"{4}\"", ((int)CSVRecordType.Result).ToString("D2", CultureInfo.CurrentCulture), result.Status, result.UniqueId, result.Category, result.ClassicReference); if (RawResponse) { streamWriter.Write(",\"{0}\"", result.ResponseXml); } streamWriter.WriteLine(); recordCount++; } } } streamWriter.WriteLine("\"{0}\",\"{1}\"", ((int)CSVRecordType.Count).ToString("D2", CultureInfo.CurrentCulture), recordCount); Log.Info("CSV report written to '{0}'", filename); } catch (IOException exception) { string errorMessage = string.Format(CultureInfo.CurrentCulture, "Unable to write XML report to '{0}'", filename); Log.Error(errorMessage); Log.Error(exception.Message); throw new MyException(errorMessage, exception); } finally { if (null != streamWriter) { streamWriter.Close(); } } } The file produced contains a set of records, one per line, 0 to N, for example: [Record Zero] [Record One] ... [Record N] However, the file produced additionally contains either nulls or incomplete records from further up the file appended to the end. For example: [Record Zero] [Record One] ... [Record N] [Lots of nulls] or [Record Zero] [Record One] ... [Record N] [Half complete records] This also happens in separate pieces of code that also use the StreamWriter class. Furthermore, the files produced all have sizes that are multiples of 1024. I've been unable to reproduce this behaviour on any other machine and have tried recreating the environment. Previous versions of the application didn't exhibit this behaviour despite having the same code for the methods in question. EDIT: Added extra code.

    Read the article

  • What's wrong with my SELECT query?

    - by user559800
    Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click Dim SQLData As New System.Data.SqlClient.SqlConnection("Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Database.mdf;Integrated Security=True;User Instance=True") Dim cmdSelect As New System.Data.SqlClient.SqlCommand("SELECT COUNT(*) FROM Table1 WHERE Name =" + TextBox1.Text + " And Last = '" + TextBox2.Text + "'", SQLData) SQLData.Open() If cmdSelect.ExecuteScalar > 0 Then Label1.Text = "Record Found ! " & TextBox1.Text & " " & TextBox2.Text Return End If Label1.Text = "Record Not Found ! " SQLData.Close() End Sub I wrote this code to find whether the record entered in TextBox1 and TextBox2 exists or not. If the record exists, the text in Label1 should be RECORD FOUND, else NO RECORD FOUND. ERROR: when I enter values in TextBox1 and TextBox2 and click the button, it shows the error: Invalid column name.

    Read the article

  • CRM2011: Home page ribbon enable rule doesn't work properly

    - by nixjojo
    In my lead home page, there is a custom button. The enable rule for that button is: <EnableRule Id="enableruleid"> <SelectionCountRule AppliesTo="SelectedEntity" Minimum="1"></SelectionCountRule> <CustomRule FunctionName="functionname" Library="$Webresource:myjavascript.js"> <CrmParameter Value="SelectedControlSelectedItemIds" /> </CustomRule> </EnableRule> The JavaScript works fine only the first time a record is selected; when you select another record, the JavaScript doesn't get called. For example, I select record A, and the button is enabled, which is fine; then I select record B, and the button should be disabled, but it isn't, it stays enabled. But if I select record B first, the ribbon is disabled as I wish, and then when I select record A, the button stays disabled.
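
    For reference, the CustomRule above hands whatever SelectedControlSelectedItemIds resolves to (an array of the selected records' ids) to functionname, and the button is enabled or disabled based on the boolean that function returns. The following is only an illustrative sketch of such a function, written in TypeScript, with the per-record check left as a hypothetical placeholder; it is not the fix for the caching behaviour described above:

        // Illustrative only: CRM is assumed to pass the ids of the currently
        // selected records (via the SelectedControlSelectedItemIds CrmParameter).
        // Returning false should leave the button disabled.
        function functionname(selectedIds: string[]): boolean {
          if (!selectedIds || selectedIds.length !== 1) {
            return false; // nothing selected, or more than one record selected
          }
          return isButtonAllowedForRecord(selectedIds[0]);
        }

        // Hypothetical placeholder for the real per-record business rule.
        function isButtonAllowedForRecord(recordId: string): boolean {
          return recordId.length > 0;
        }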

    Read the article

  • Keep website and webservices warm with zero coding

    - by oazabir
    If you want to keep your websites or webservices warm and save users from seeing the long warm-up time after an application pool recycle, an IIS restart, a new code deployment, or even a Windows restart, you can use the tinyget command line tool, which comes with the IIS Resource Kit, to hit the site and services and keep them warm. Here’s how: First get tinyget from here. Download and install the IIS 6.0 Resource Kit on some PC. Then copy the tinyget.exe from “c:\program files…\IIS 6.0 ResourceKit\Tools\tinyget” to the server where your IIS 6.0 or IIS 7 is running. Then create a batch file that will hit the pages and webservices. Something like this:
    SET TINYGET=C:\Program Files (x86)\IIS Resources\TinyGet\tinyget.exe
    "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/ -status:200
    "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL -status:200
    First I am hitting the homepage to keep the webpage warm. Then I am hitting the webservice URL with the ?WSDL parameter, which allows ASP.NET to compile the service if not already compiled and walk through all the operations and reflect on them, thus loading all related DLLs into memory and reducing the warm-up time when hit. Tinyget takes the server's name or IP in the –srv parameter and then the actual URI in –uri. I have specified the HTTP response code to expect in the –status parameter. It ensures the site is alive and is returning an HTTP 200 code. Besides just warming up a site, you can do some load testing on the site. Tinyget can run in multiple threads and run loops to hit some URL. You can literally blow up a site with commands like this:
    "%TINYGET%" -threads:30 -loop:100 -srv:google.com -uri:http://www.google.com/ -status:200
    Tinyget is also pretty useful for running automated tests. You can record http posts in a text file and then use it to make http posts to some page. Then you can add a matching clause to check for a certain string in the output to ensure the correct response is given. Thus with some simple command line commands, you can warm up, do some transactions, validate that the site is giving the correct response, as well as run a load test to ensure the server is performing well. A very cheap way to get a lot done.

    Read the article

  • Python Django sites on Apache+mod_wsgi with nginx proxy: highly fluctuating performance

    - by Halfgaar
    I have an Ubuntu 10.04 box running several dozen Python Django sites using mod_wsgi (embedded mode; the faster mode, if properly configured). Performance fluctuates wildly: sometimes fast, sometimes several seconds of delay. The smokeping graphs are all over the place. Recently, I also added an nginx proxy for the static content, in the hopes it would cure the highly fluctuating performance. But even though it reduced the number of requests Apache has to process significantly, it didn't help with the main problem. When clicking around on websites while running htop, it can be seen that sometimes requests are almost instant, whereas sometimes a request causes Apache to consume 100% CPU for a few seconds. I really don't understand where this fluctuation comes from. I have configured the mpm_worker for Apache like this: StartServers 1 MinSpareThreads 50 MaxSpareThreads 50 ThreadLimit 64 ThreadsPerChild 50 MaxClients 50 ServerLimit 1 MaxRequestsPerChild 0 MaxMemFree 2048. That is, 1 server with 50 threads, max 50 clients. Munin and apache2ctl -t both show a consistent presence of workers; they are not destroyed and created all the time. Yet it behaves as if they were. This tells me that once a sub-interpreter is created, it should remain in memory, yet it seems sites have to reload all the time. I also have a nginx+gunicorn box, which performs quite well. I would really like to know why Apache is so random. This is a virtual host config: <VirtualHost *:81> ServerAdmin [email protected] ServerName example.com DocumentRoot /srv/http/site/bla Alias /static/ /srv/http/site/static Alias /media/ /srv/http/site/media WSGIScriptAlias / /srv/http/site/passenger_wsgi.py <Directory /> AllowOverride None </Directory> <Directory /srv/http/site> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> Versions: Ubuntu 10.04, Apache 2.2.14, mod_wsgi 2.8, nginx 0.7.65. Edit: I've put some code in the settings.py file of a site that writes the date to a tmp file whenever it's loaded. I can now see that the site is not randomly reloaded all the time, so Apache must be keeping it in memory. So that's good, except it doesn't bring me closer to an answer... Edit: I just found an error that might also be related to this: File "/usr/lib/python2.6/subprocess.py", line 633, in __init__ errread, errwrite) File "/usr/lib/python2.6/subprocess.py", line 1049, in _execute_child self.pid = os.fork() OSError: [Errno 12] Cannot allocate memory The server has 600 of 2000 MB free, which should be plenty. Is there a limit that is set on Apache or WSGI somewhere?

    Read the article

  • 502 Bad Gateway with nginx + apache + subversion + ssl (SVN COPY)

    - by theplatz
    I've asked this on stackoverflow, but it may be better suited for serverfault... I'm having a problem running Apache + Subversion with SSL behind an Nginx proxy and I'm hoping someone might have the answer. I've scoured google for hours looking for the answer to my problem and can't seem to figure it out. What I'm seeing are "502 (Bad Gateway)" errors when trying to MOVE or COPY using subversion; however, checkouts and commits work fine. Here are the relevant parts (I think) of the nginx and apache config files in question: Nginx upstream subversion_hosts { server 127.0.0.1:80; } server { listen x.x.x.x:80; server_name hostname; access_log /srv/log/nginx/http.access_log main; error_log /srv/log/nginx/http.error_log info; # redirect all requests to https rewrite ^/(.*)$ https://hostname/$1 redirect; } # HTTPS server server { listen x.x.x.x:443; server_name hostname; passenger_enabled on; root /path/to/rails/root; access_log /srv/log/nginx/ssl.access_log main; error_log /srv/log/nginx/ssl.error_log info; ssl on; ssl_certificate server.crt; ssl_certificate_key server.key; add_header Front-End-Https on; location /svn { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; set $fixed_destination $http_destination; if ( $http_destination ~* ^https(.*)$ ) { set $fixed_destination http$1; } proxy_set_header Destination $fixed_destination; proxy_pass http://subversion_hosts; } } Apache Listen 127.0.0.1:80 <VirtualHost *:80> # in order to support COPY and MOVE, etc - over https (443), # ServerName _must_ be the same as the nginx servername # http://trac.edgewall.org/wiki/TracNginxRecipe ServerName hostname UseCanonicalName on <Location /svn> DAV svn SVNParentPath "/srv/svn" Order deny,allow Deny from all Satisfy any # Some config omitted ... </Location> ErrorLog /var/log/apache2/subversion_error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/subversion_access.log combined </VirtualHost> From what I could tell while researching this problem, the server name has to match on both the apache server as well as the nginx server, which I've done. Additionally, this problem seems to stick around even if I change the configuration to use http only.

    Read the article

  • How to connect to SQLServer 2k5 using Ruby 1.8.7 over W2k3 with active record 2.3.5

    - by Luke
    Hi all, sorry for the blast. I'm trying to connect to SQL Server 2k5 using Ruby 1.8.7 over W2k3 with ActiveRecord 2.3.5. But when I run 'rake migrate' it throws the following: rake migrate --trace Hoe.new {...} deprecated. Switch to Hoe.spec. Invoke migrate (first_time) Invoke environment (first_time) Execute environment Execute migrate rake aborted! no such file to load -- odbc (...) C:/Program Files/test/Rakefile:146 (...) So, my Rakefile at line 146 says: ActiveRecord::Migrator.migrate('db/migrate', ENV["VERSION"] ? ENV["VERSION"].to_i : nil ) The database.yml has been configured in so many ways without success. I've tried setting the mode to odbc, configuring a system DSN, and relying completely on the ActiveRecord support for SQL Server, but with no success at all. The same Rakefile works fine over Postgres and Oracle with the proper gems installed, of course. But I can't get this to work. Any help will be appreciated. Thanks in advance!

    Read the article

  • PLPGSQL : How to return a record from function executed by INSERT/UPDATE rule?

    - by seas
    I have the following schema for my database: create sequence data_sequence; create table data_table { id integer primary key; field varchar(100); }; create view data_view as select id, field from data_table; create function data_insert(_new data_view) returns data_view as $$declare _id integer; _result data_view%rowtype; begin _id := nextval('data_sequence'); insert into data_table(id, field) values(_id, _new.field); select * into _result from data_view where id = _id; return _result; end; $$ language plpgsql; create rule insert as on insert to data_view do instead select data_insert(new); Then I type in psql: insert into data_view(field) values('abc'); I would like to see something like:
     id | field
    ----+-------
      1 | abc
    Instead I see:
     data_insert
    -------------
     (1, "abc")
    Is it possible to fix this somehow? Thanks for any ideas. The ultimate idea is to use this in other functions, so that I could obtain the id of the just-inserted record without selecting for it from scratch. Something like: insert into data_view(field) values('abc') returning id into my_variable would be nice, but it doesn't work, giving the error: ERROR: cannot perform INSERT RETURNING on relation "data_view" HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause. I don't really understand that HINT. I use PostgreSQL 8.4.

    Read the article

  • SQL Server 2008: Comparing similar records - Need to still display an ID for a record when the JOIN has no matches

    - by aleppke
    I'm writing a SQL Server 2008 report that will compare genetic test results for animals. A genetic test consists of an animalId, a gene and a result. Not all animals will have the same genes tested, but I need to be able to display the results side-by-side for a given set of animals and only include the genes that are present for at least one of the selected animals. My TestResult table has the following data in it:
    animalId  gene  result
    1         a     CC
    1         b     CT
    1         d     TT
    2         a     CT
    2         b     CT
    2         c     TT
    3         a     CT
    3         b     TT
    3         c     CC
    3         d     CC
    3         e     TT
    I need to generate a result set that looks like the following. Note that Animal 3 is not being displayed (the user doesn't want to see its results) and neither are results for gene "e", since neither Animal 1 nor Animal 2 has a result for that gene:
    SireID  SireResult  CalfID  CalfResult  Gene
    1       CC          2       CT          a
    1       CT          2       CT          b
    1       NULL        2       TT          c
    1       TT          2       NULL        d
    But I can only manage to get this:
    SireID  SireResult  CalfID  CalfResult  Gene
    1       CC          2       CT          a
    1       CT          2       CT          b
    NULL    NULL        2       TT          c
    1       TT          NULL    NULL        d
    This is the query I'm using: SELECT sire.animalId AS 'SireID' ,sire.result AS 'SireResult' ,calf.animalId AS 'CalfID' ,calf.result AS 'CalfResult' ,sire.gene AS 'Gene' FROM (SELECT s.animalId ,s.result ,m1.gene FROM (SELECT [animalId ] ,result ,gene FROM TestResult WHERE animalId IN (1)) s FULL JOIN (SELECT DISTINCT gene FROM TestResult WHERE animalId IN (1, 2)) m1 ON s.marker = m1.marker) sire FULL JOIN (SELECT c.animalId ,c.result ,m2.gene FROM (SELECT animalId ,result ,gene FROM TestResult WHERE animalId IN (2)) c FULL JOIN (SELECT DISTINCT gene FROM TestResult WHERE animalId IN (1, 2)) m2 ON c.gene = m2.gene) calf ON sire.gene = calf.gene How do I get the SireIDs and CalfIDs to display their values when they don't have a record associated with a particular gene? I was thinking of using COALESCE, but I can't figure out how to specify the correct animalId to pass in. Any help would be appreciated.

    Read the article

  • Record Disappeared from Mysql Table, How Can I Find Out What Happened?

    - by Jascha
    I got the fire alarm phone call, AIM messages and email today from a client stating "The site is down! WTF happened?!" Well, after a little digging, it turns out one of the records in a table had been wiped clean, but without removing the row itself. So I had the representation of data, but a bunch of empty fields. (Needless to say, I need to write a catch for this into my code.) What my real question is: where can I figure out what happened? I've got access to phpMyAdmin and that's about it. I found some access logs in the root directory of my server, but that just tells me the client was in the admin area I built editing that record; I'd like to know specifically what they did that made all of the data go away (what query was run, etc.). Is it possible without real server admin access? Is there a neat little PHP-to-MySQL class that returns data like this? Thanks in advance. -Jascha

    Read the article

  • Correlate GROUP BY and LEFT JOIN on multiple criteria to show latest record?

    - by Sunbird
    In a simple stock management database, a quantity of new stock is added and shipped until the quantity reaches zero. Each stock movement is assigned a reference; only the latest reference is used. In the example provided, the latest references are never shown: stock IDs 1 and 4 should have the references Charlie and Foxtrot respectively, but instead show Alpha and Delta. How can a GROUP BY and LEFT JOIN on multiple criteria be correlated to show the latest record? http://sqlfiddle.com/#!2/6bf37/107 CREATE TABLE stock ( id tinyint PRIMARY KEY, quantity int, parent_id tinyint ); CREATE TABLE stock_reference ( id tinyint PRIMARY KEY, stock_id tinyint, stock_reference_type_id tinyint, reference varchar(50) ); CREATE TABLE stock_reference_type ( id tinyint PRIMARY KEY, name varchar(50) ); INSERT INTO stock VALUES (1, 10, 1), (2, -5, 1), (3, -5, 1), (4, 20, 4), (5, -10, 4), (6, -5, 4); INSERT INTO stock_reference VALUES (1, 1, 1, 'Alpha'), (2, 2, 1, 'Beta'), (3, 3, 1, 'Charlie'), (4, 4, 1, 'Delta'), (5, 5, 1, 'Echo'), (6, 6, 1, 'Foxtrot'); INSERT INTO stock_reference_type VALUES (1, 'Customer Reference'); SELECT stock.id, SUM(stock.quantity) as quantity, customer.reference FROM stock LEFT JOIN stock_reference AS customer ON stock.id = customer.stock_id AND stock_reference_type_id = 1 GROUP BY stock.parent_id

    Read the article

  • Record the timestamps of slide changes during a live Powerpoint presentation?

    - by StackedCrooked
    I am planning to implement a lecture capture solution. One of the requirements is to record both the presenter and the slideshow. The presenter is recorded with a video camera, obviously, and the slideshow will probably be captured using a tool like Camtasia. Now during playback three components are visible: the presenter, the slides and a table of contents. Clicking a chapter title in the TOC causes the video to navigate to the corresponding section. This means that a mapping must be made between chapter titles and their timestamps in the video. Usually a change of topic is accompanied by a slide change in the PowerPoint presentation, so the timestamps could be deduced from the slide changes. However, this requires me to detect slide changes during the live presentation, and I don't know how to do that. Does anyone here know how to detect slide changes? Is there a PowerPoint API where I can connect event handlers or something like that? I'd greatly appreciate your help! Edit: This issue is no longer relevant for my current work, so this question will not be updated by me. However, you are still free to help others by posting your answers/insights here.
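
    Setting aside the slide-change detection itself, the playback behaviour described above (clicking a TOC entry jumps the video to that section) only needs the chapter-title-to-timestamp map. A small sketch of that mapping in TypeScript, with the chapter data and the use of an HTML5 video element as illustrative assumptions:

        // Hypothetical chapter list built from recorded slide-change timestamps.
        interface Chapter {
          title: string;
          seconds: number; // offset into the lecture recording
        }

        const chapters: Chapter[] = [
          { title: "Introduction", seconds: 0 },
          { title: "First topic", seconds: 95 },
          { title: "Second topic", seconds: 310 },
        ];

        // Jump the player to the chapter the viewer clicked in the TOC.
        function seekToChapter(video: HTMLVideoElement, chapter: Chapter): void {
          video.currentTime = chapter.seconds;
          void video.play();
        }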

    Read the article

  • How to figure out which record has been deleted in an efficient way?

    - by janetsmith
    Hi, I am working on an in-house ETL solution, from db1 (Oracle) to db2 (Sybase). We need to transfer data incrementally (Change Data Capture?) into db2. I only have read access to the tables, so I can't create any table or trigger in the Oracle db1. The challenge I am facing is: how to detect record deletion in Oracle? The solution I can think of is to use an additional standalone/embedded db (e.g. Derby, H2, etc.). This db contains 2 tables, namely old_data and new_data. old_data contains the primary key field from the table of interest in Oracle. Every time the ETL process runs, the new_data table is populated with the primary key field from the Oracle table. After that, I run the following SQL command to get the deleted rows: SELECT old_data.id FROM old_data WHERE old_data.id NOT IN (SELECT new_data.id FROM new_data) I think this will be a very expensive operation when the volume of data becomes very large. Do you have any better idea of doing this? Thanks.
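
    For what it's worth, independent of which embedded database holds the two snapshots, the comparison itself is just a set difference between the previous and current key sets. A minimal sketch of that idea in TypeScript (the function and sample values are illustrative, not part of the original question):

        // Ids present in the previous snapshot but missing from the current one
        // are the rows deleted in the source since the last ETL run.
        function findDeletedIds(oldIds: Iterable<number>, newIds: Iterable<number>): number[] {
          const current = new Set(newIds);
          const deleted: number[] = [];
          for (const id of oldIds) {
            if (!current.has(id)) {
              deleted.push(id);
            }
          }
          return deleted;
        }

        // Example: rows 3 and 7 disappeared from the source table.
        const removed = findDeletedIds([1, 3, 5, 7], [1, 5]); // -> [3, 7]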

    Read the article

  • How to return a record from function executed by INSERT/UPDATE rule?

    - by seas
    Do the following scheme for my database: create sequence data_sequence; create table data_table { id integer primary key; field varchar(100); }; create view data_view as select id, field from data_table; create function data_insert(_new data_view) returns data_view as $$declare _id integer; _result data_view%rowtype; begin _id := nextval('data_sequence'); insert into data_table(id, field) values(_id, _new.field); select * into _result from data_view where id = _id; return _result; end; $$ language plpgsql; create rule insert as on insert to data_view do instead select data_insert(new); Then type in psql: insert into data_view(field) values('abc'); Would like to see something like: id | field ----+--------- 1 | abc Instead see: data_insert ------------- (1, "abc") Is it possible to fix this somehow? Thanks for any ideas. Ultimate idea is to use this in other functions, so that I could obtain id of just inserted record without selecting for it from scratch. Something like: insert into data_view(field) values('abc') returning id into my_variable would be nice but doesn't work with error: ERROR: cannot perform INSERT RETURNING on relation "data_view" HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause. I don't really understand that HINT. I use PostgreSQL 8.4.

    Read the article

  • How to return a record from function, executed by INSERT/UPDATE rule (trigger)?

    - by seas
    Do the following scheme for my database: create sequence data_sequence; create table data_table { id integer primary key; field varchar(100); }; create view data_view as select id, field from data_table; create function data_insert(_new data_view) returns data_view as $$declare _id integer; _result data_view%rowtype; begin _id := nextval('data_sequence'); insert into data_table(id, field) values(_id, _new.field); select * into _result from data_view where id = _id; return _result; end; $$ language plpgsql; create rule insert as on insert to data_view do instead select data_insert(new); Then type in psql: insert into data_view(field) values('abc'); Would like to see something like: id | field ----+--------- 1 | abc Instead see: data_insert ------------- (1, "abc") Is it possible to fix this somehow? Thanks for any ideas. Ultimate idea is to use this in other functions, so that I could obtain id of just inserted record without selecting for it from scratch. Something like: insert into data_view(field) values('abc') returning id into my_variable would be nice but doesn't work with error: ERROR: cannot perform INSERT RETURNING on relation "data_view" HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause. I don't really understand that HINT. I use PostgreSQL 8.4.

    Read the article

  • Rails nested attributes with a join model, where one of the models being joined is a new record

    - by gzuki
    I'm trying to build a grid in Rails for entering data. It has rows and columns, and rows and columns are joined by cells. In my view, I need the grid to be able to handle having 'new' rows and columns on the edge, so that if you type in them and then submit, they are automatically generated, and their shared cells are connected to them correctly. I want to be able to do this without JS. Rails nested attributes fail to handle being mapped to both a new record and a new column; they can only do one or the other. The reason is that they are nested specifically in one of the two models, and whichever one they aren't nested in will have no id (since it doesn't exist yet), and when pushed through accepts_nested_attributes_for on the top-level Grid model, they will only be bound to the new object created for whatever they were nested in. How can I handle this? Do I have to override Rails' handling of nested attributes? My models look like this, btw: class Grid < ActiveRecord::Base has_many :rows has_many :columns has_many :cells, :through => :rows accepts_nested_attributes_for :rows, :allow_destroy => true, :reject_if => lambda {|a| a[:description].blank? } accepts_nested_attributes_for :columns, :allow_destroy => true, :reject_if => lambda {|a| a[:description].blank? } end class Column < ActiveRecord::Base belongs_to :grid has_many :cells, :dependent => :destroy has_many :rows, :through => :grid end class Row < ActiveRecord::Base belongs_to :grid has_many :cells, :dependent => :destroy has_many :columns, :through => :grid accepts_nested_attributes_for :cells end class Cell < ActiveRecord::Base belongs_to :row belongs_to :column has_one :grid, :through => :row end

    Read the article

  • Fetching only the first record in the table (via the view) without changing controller?

    - by bgadoci
    I am trying to fetch only the first record in my table for display. I am creating a site where a user can upload multiple images and attach them to a post, but I only want to display the first image for each post. For further clarification, posts belong_to projects. So when you are on the projects show page you see multiple posts. In this view I only want to display the first image for each post. Is there a way to do this in the view without affecting the controller (as later I want to allow users to browse all photos through the addition of a lightbox)? Here is my /views/posts/_post.html.erb code: <% div_for post do %> <% post.photos.each do | photo | %> <%= image_tag(photo.data.url(:large), :alt => '') %> <%= photo.description %> <% end unless post.photos.first.new_record? rescue nil %> <%= link_to h(post.link_title), post.link %> <%= h(post.description) %> <%= link_to 'Manage this post', edit_post_path(post) %> <% end %> UPDATE: I am using a Photos model to attach multiple photos to each post, using Paperclip.

    Read the article

  • Is it possible to modify the value of a record's primary key in Oracle when child records exist?

    - by Chris Farmer
    I have some Oracle tables that represent a parent-child relationship. They look something like this: create table Parent ( parent_id varchar2(20) not null primary key ); create table Child ( child_id number not null primary key, parent_id varchar2(20) not null, constraint fk_parent_id foreign key (parent_id) references Parent (parent_id) ); This is a live database and its schema was designed long ago under the assumption that the parent_id field would be static and unchanging for a given record. Now the rules have changed and we really would like to change the value of parent_id for some records. For example, I have these records:
    Parent:
    parent_id
    ---------
    ABC123
    Child:
    child_id  parent_id
    --------  ---------
    1         ABC123
    2         ABC123
    And I want to modify ABC123 in these records in both tables to something else. It's my understanding that one cannot write an Oracle update statement that will update both parent and child tables simultaneously, and given the FK constraint, I'm not sure how best to update my database. I am currently disabling the fk_parent_id constraint, updating each table independently, and then enabling the constraint. Is there a better, single-step way to update this content?

    Read the article

  • SQL: Getting the full record with the highest count.

    - by sqlnoob
    I'm trying to write SQL that produces the desired result from the data below. Data:
    ID Num  Opt1  Opt2  Opt3  Count
    1       A     A     E     1
    1       A     B     J     4
    2       A     A     E     9
    3       B     A     F     1
    3       B     C     K     14
    4       A     A     M     3
    5       B     D     G     5
    6       C     C     E     13
    6       C     C     M     1
    Desired result:
    ID Num  Opt1  Opt2  Opt3  Count
    1       A     B     J     4
    2       A     A     E     9
    3       B     C     K     14
    4       A     A     M     3
    5       B     D     G     5
    6       C     C     E     13
    Essentially I want, for each ID Num, the full record with the highest count. I tried doing a group by, but if I group by Opt1, Opt2, Opt3, this doesn't work because it returns the highest count for each (ID Num, Opt1, Opt2, Opt3) combination, which is not what I want. If I only group by ID Num, I can get the max for each ID Num, but I lose the information as to which (Opt1, Opt2, Opt3) combination gives this count. I feel like I've done this before, but I don't often work with SQL and I can't remember how. Is there an easy way to do this?
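
    Restated outside SQL, the requirement is a per-group "argmax": for every ID Num, keep only the row whose Count is largest. A small TypeScript sketch of that semantics (purely to pin down the intended behaviour, not a SQL answer; field names mirror the sample data):

        interface Row {
          idNum: number;
          opt1: string;
          opt2: string;
          opt3: string;
          count: number;
        }

        // For each idNum keep only the row with the highest count.
        function maxCountPerId(rows: Row[]): Row[] {
          const best = new Map<number, Row>();
          for (const row of rows) {
            const current = best.get(row.idNum);
            if (!current || row.count > current.count) {
              best.set(row.idNum, row);
            }
          }
          return [...best.values()];
        }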

    Read the article
