Search Results

Search found 7884 results on 316 pages for 'ben record'.


  • How to return a record from a function executed by an INSERT/UPDATE rule (trigger)?

    - by seas
    I have the following schema in my database:

        create sequence data_sequence;

        create table data_table (
            id integer primary key,
            field varchar(100)
        );

        create view data_view as select id, field from data_table;

        create function data_insert(_new data_view) returns data_view as $$
        declare
            _id integer;
            _result data_view%rowtype;
        begin
            _id := nextval('data_sequence');
            insert into data_table(id, field) values(_id, _new.field);
            select * into _result from data_view where id = _id;
            return _result;
        end;
        $$ language plpgsql;

        create rule insert as on insert to data_view do instead select data_insert(new);

    Then I type in psql:

        insert into data_view(field) values('abc');

    I would like to see something like:

         id | field
        ----+---------
          1 | abc

    Instead I see:

        data_insert
        -------------
        (1, "abc")

    Is it possible to fix this somehow? Thanks for any ideas. The ultimate goal is to use this from other functions, so that I can obtain the id of a just-inserted record without selecting for it again. Something like insert into data_view(field) values('abc') returning id into my_variable would be nice, but it fails with:

        ERROR: cannot perform INSERT RETURNING on relation "data_view"
        HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause.

    I don't really understand that HINT. I use PostgreSQL 8.4.
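    Not part of the original question, but for reference, a minimal sketch of the kind of rule the HINT describes - an unconditional ON INSERT DO INSTEAD rule whose action is a plain INSERT carrying a RETURNING clause (names reuse the schema above; untested against 8.4):

        create or replace rule data_view_insert as
        on insert to data_view do instead
            insert into data_table(id, field)
            values (nextval('data_sequence'), new.field)
            returning data_table.id, data_table.field;

    With a rule of this shape, insert into data_view(field) values('abc') returning id should be accepted, because the rewritten action itself provides the RETURNING list for the view.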


  • Rails nested attributes with a join model, where one of the models being joined is a new record

    - by gzuki
    I'm trying to build a grid in Rails for entering data. It has rows and columns, and rows and columns are joined by cells. In my view, I need the grid to be able to handle 'new' rows and columns on the edge, so that if you type in them and then submit, they are automatically generated and their shared cells are connected to them correctly. I want to be able to do this without JS. Rails nested attributes fail to handle a cell mapped to both a new row and a new column; they can only do one or the other. The reason is that the cells are nested specifically in one of the two models, and whichever one they aren't nested in will have no id (since it doesn't exist yet), so when pushed through accepts_nested_attributes_for on the top-level Grid model they will only be bound to the new object created for whatever they were nested in. How can I handle this? Do I have to override Rails' handling of nested attributes? My models look like this, btw:

        class Grid < ActiveRecord::Base
          has_many :rows
          has_many :columns
          has_many :cells, :through => :rows
          accepts_nested_attributes_for :rows, :allow_destroy => true,
            :reject_if => lambda {|a| a[:description].blank? }
          accepts_nested_attributes_for :columns, :allow_destroy => true,
            :reject_if => lambda {|a| a[:description].blank? }
        end

        class Column < ActiveRecord::Base
          belongs_to :grid
          has_many :cells, :dependent => :destroy
          has_many :rows, :through => :grid
        end

        class Row < ActiveRecord::Base
          belongs_to :grid
          has_many :cells, :dependent => :destroy
          has_many :columns, :through => :grid
          accepts_nested_attributes_for :cells
        end

        class Cell < ActiveRecord::Base
          belongs_to :row
          belongs_to :column
          has_one :grid, :through => :row
        end


  • Fetching only the first record in the table (via the view) without changing the controller?

    - by bgadoci
    I am trying to fetch only the first record in my table for display. I am creating a site where a user can upload multiple images and attach them to a post, but I only want to display the first image for each post. For further clarification, posts belong_to projects, so when you are on the project's show page you see multiple posts. In this view I only want to display the first image for each post. Is there a way to do this in the view without affecting the controller (as later I want to allow users to browse all photos through the addition of a lightbox)? Here is my /views/posts/_post.html.erb code:

        <% div_for post do %>
          <% post.photos.each do | photo | %>
            <%= image_tag(photo.data.url(:large), :alt => '') %>
            <%= photo.description %>
          <% end unless post.photos.first.new_record? rescue nil %>
          <%= link_to h(post.link_title), post.link %>
          <%= h(post.description) %>
          <%= link_to 'Manage this post', edit_post_path(post) %>
        <% end %>

    UPDATE: I am using a photos model to attach multiple photos to each post, with Paperclip for the attachments.
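    Not from the original thread, but a quick sketch of the view-only change being asked about - rendering just the first photo instead of iterating, using the same helpers as the partial above:

        <% if (photo = post.photos.first) %>
          <%= image_tag(photo.data.url(:large), :alt => '') %>
          <%= photo.description %>
        <% end %>

    post.photos.first returns the first associated photo (or nil when the post has none), so the controller does not need to change.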


  • Is it possible to modify the value of a record's primary key in Oracle when child records exist?

    - by Chris Farmer
    I have some Oracle tables that represent a parent-child relationship. They look something like this:

        create table Parent (
            parent_id varchar2(20) not null primary key
        );

        create table Child (
            child_id number not null primary key,
            parent_id varchar2(20) not null,
            constraint fk_parent_id foreign key (parent_id) references Parent (parent_id)
        );

    This is a live database and its schema was designed long ago under the assumption that the parent_id field would be static and unchanging for a given record. Now the rules have changed and we really would like to change the value of parent_id for some records. For example, I have these records:

        Parent:
        parent_id
        ---------
        ABC123

        Child:
        child_id  parent_id
        --------  ---------
        1         ABC123
        2         ABC123

    And I want to modify ABC123 in these records in both tables to something else. It's my understanding that one cannot write an Oracle update statement that will update both parent and child tables simultaneously, and given the FK constraint, I'm not sure how best to update my database. I am currently disabling the fk_parent_id constraint, updating each table independently, and then enabling the constraint. Is there a better, single-step way to update this content?
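    An editorial aside, not from the original question: one common alternative to disabling the constraint is to recreate the foreign key as deferrable and make both updates inside a single transaction. A rough sketch (the replacement key value XYZ789 is illustrative):

        alter table Child drop constraint fk_parent_id;
        alter table Child add constraint fk_parent_id
            foreign key (parent_id) references Parent (parent_id)
            deferrable initially immediate;

        -- later, when a key actually has to change, in one transaction:
        set constraint fk_parent_id deferred;
        update Parent set parent_id = 'XYZ789' where parent_id = 'ABC123';
        update Child  set parent_id = 'XYZ789' where parent_id = 'ABC123';
        commit;

    A deferred constraint is only checked at commit, so the child rows may briefly point at a not-yet-updated parent inside the transaction without raising an error.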


  • SQL: Getting the full record with the highest count.

    - by sqlnoob
    I'm trying to write SQL that produces the desired result from the data below.

    Data:

        ID Num  Opt1  Opt2  Opt3  Count
        1       A     A     E     1
        1       A     B     J     4
        2       A     A     E     9
        3       B     A     F     1
        3       B     C     K     14
        4       A     A     M     3
        5       B     D     G     5
        6       C     C     E     13
        6       C     C     M     1

    Desired result:

        ID Num  Opt1  Opt2  Opt3  Count
        1       A     B     J     4
        2       A     A     E     9
        3       B     C     K     14
        4       A     A     M     3
        5       B     D     G     5
        6       C     C     E     13

    Essentially I want, for each ID Num, the full record with the highest count. I tried doing a group by, but if I group by Opt1, Opt2, Opt3 this doesn't work, because it returns the highest count for each (ID Num, Opt1, Opt2, Opt3) combination, which is not what I want. If I only group by ID Num, I can get the max for each ID Num, but I lose the information as to which (Opt1, Opt2, Opt3) combination gives this count. I feel like I've done this before, but I don't often work with SQL and I can't remember how. Is there an easy way to do this?
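    Not from the original thread - a sketch of the usual "greatest per group" pattern, joining back to a per-group MAX. Table and column names here are placeholders (Count may need quoting or renaming depending on the database), and ties would return more than one row per ID Num:

        SELECT t.*
        FROM   scores t
        JOIN  (SELECT id_num, MAX(cnt) AS max_cnt
               FROM   scores
               GROUP  BY id_num) m
          ON   m.id_num  = t.id_num
         AND   m.max_cnt = t.cnt;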


  • Databound label text displays old data upon save; re-opening the record shows the correct data?

    - by Mike Hestness
    I have a Windows Forms application with a main form and a button on that form to set a "Qualified" date/time stamp. I have a databound label control whose value I set when the user clicks the button. The date/time stamp displays correctly, but when you click the save button it shows either a blank or the previous date/time. If you then close the record and re-open it, the new date/time value is displayed, so the data is getting to the database; it's just not persisting in the dataset as new data. I'm not sure why the databinding isn't refreshing the value. I have noticed this behavior even with a textbox if I set the value programmatically; if I manually type in a value it persists. Here is the code I'm using in the click event of my button:

        string result = string.Empty;
        string jobOrderID = UnitOfWork.MasterDSBS.MJOBO[0].JC_IDNO.ToString();
        string timeNow = DateTime.Now.ToString();

        // Call web service to make the update
        RadServices.Service1 rsWeb = new RadServices.Service1();
        result = rsWeb.SetQualifiedDate(timeNow, jobOrderID);

        // Change the qualified label text.
        _btnQualify.Text = "Qualified";
        rlQualifiedDate.Text = timeNow;
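    An editorial note rather than part of the question: one plausible reading is that the handler above updates the database through the web service and sets the label's Text directly, but never writes the new value into the dataset row the label is bound to, so a save re-binds the stale row value. A hedged sketch of that idea (QUALIFIED_DATE and myBindingSource are guesses, not names from the original code):

        // also push the value into the bound row so the dataset matches the database
        UnitOfWork.MasterDSBS.MJOBO[0].QUALIFIED_DATE = DateTime.Parse(timeNow); // hypothetical column name
        // or force the binding to re-read its data source:
        // myBindingSource.ResetBindings(false);                                 // hypothetical BindingSource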


  • Google Apps - Can I configure a wildcard MX record and a catch-all email address for a domain?

    - by Rohit
    I am using Google Apps Premier Edition. I want to create a disposable email address service and I want to catch all emails for a domain. This means that I should be able to catch all mails sent to an arbitrary userid and/or arbitrary domain and store them into a single Google Apps account. For example, in a single account I want to get all mails sent to: 1) [email protected] 2) [email protected] without requiring to do any extra configuration in Google Apps for abc or xyz. My app will download mails from this account and process accordingly. I have figured out that I could do (1) by specifying a catch all email address. Is the combination of both (1) and (2) possible?
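    For illustration only, not from the original question: a wildcard MX record in BIND zone-file syntax would look something like the line below, routing mail for any otherwise-unmatched subdomain to Google's inbound servers; whether Google Apps will then accept and catch-all such mail is exactly the question being asked.

        *.mydomain.com.    IN  MX  10  aspmx.l.google.com.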


  • How to record a macro for formatting tables in Excel 2010?

    - by GIS Man
    I'm working with Excel 2010 and have made over 20 tables in one sheet. I just want to work more efficiently by making a simple macro that auto-formats a table. This is the style I want to apply with the macro:

        Font: 10, Bold, Arial
        Borders: All borders
        Text: Center
        Table: 3*5 (rows * columns)
        Cell fill for the header row only (any color)

    I've uploaded a sample table with that style, in case my question is not clear enough. Thanks for any help!
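    Not from the original thread - a rough VBA sketch of a macro matching that description, assuming the 3x5 table starts at the currently selected cell:

        Sub FormatSmallTable()
            ' format the 3-row by 5-column block starting at the selection
            With Selection.Resize(3, 5)
                .Font.Name = "Arial"
                .Font.Size = 10
                .Font.Bold = True
                .HorizontalAlignment = xlCenter
                .Borders.LineStyle = xlContinuous
                .Rows(1).Interior.Color = RGB(200, 200, 200)   ' header row shading
            End With
        End Sub

    Recording with the macro recorder while formatting one table by hand, then trimming the generated code, gives much the same starting point.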


  • Can the Nikon D50 be hacked to record video?

    - by andy
    I mean hacked in terms of software only, with minor hardware hacking if necessary, and without, of course, breaking the camera. Also, if possible, video should be recordable at an acceptable frame rate, around 20 fps. Thanks.

    P.S. The reason for this question, just to give some context before people tell me to buy a new camera: for still photography I now almost exclusively shoot film with a Nikon F3, and my D50 is gathering dust. I want to shoot some video of my Capoeira classes, for personal use, and thought maybe I could put my D50 to good use. Yes, I could buy a D5000 or an older D90, but really I just want to shoot some video with my nice wide Nikkor lenses at no extra cost!


  • Is there a way to read the contents of the master boot record?

    - by Codezilla
    Reading another question on here made me curious whether it's possible to actually read the contents of the MBR. As I understand it, there's a certain area at the very front of the disk that holds this information. I'm curious whether it's sort of like an ini file or some sort of script that runs and tells the computer what it needs to know about where to boot from, along with other important information like sectors, heads, and cylinders. I don't know much about what would be in it, but I thought it'd be interesting to learn more about the specifics.
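    As an aside, not part of the original question: the MBR is the first 512-byte sector of the disk, holding boot code and the partition table rather than anything like an ini file. On a Linux system it can be copied out and inspected with standard tools; a sketch, assuming the first disk is /dev/sda:

        sudo dd if=/dev/sda of=mbr.bin bs=512 count=1   # copy the first sector
        file mbr.bin                                    # identifies it as a boot sector / partition table
        xxd mbr.bin | less                              # view the raw bytes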


  • SOA Suite 11g Native Format Builder Complex Format Example

    - by bob.webster
    This rather long posting details the steps required to process a grouping of fixed length records using Format Builder.   If it’s 10 pm and you’re feeling beat you might want to leave this until tomorrow.  But if it’s 10 pm and you need to get a Format Builder Complex template done, read on… The goal is to process individual orders from a file using the 11g File Adapter and Format Builder Sample Data =========== 001Square Widget            0245.98 102Triagular Widget         1120.00 403Circular Widget           0099.45 ORD8898302/01/2011 301Hexagon Widget         1150.98 ORD6735502/01/2011 The records are fixed length records representing a number of logical Order records. Each order record consists of a number of item records starting with a 3 digit number, followed by a single Summary Record which starts with the constant ORD. How can this file be processed so that the first poll returns the first order? 001Square Widget            0245.98 102Triagular Widget         1120.00 403Circular Widget           0099.45 ORD8898302/01/2011 And the second poll returns the second order? 301Hexagon Widget           1150.98 ORD6735502/01/2011 Note: if you need more than one order per poll, that’s also possible, see the “Multiple Messages” field in the “File Adapter Step 6 of 9” snapshot further down.   To follow along with this example you will need - Studio Edition Version 11.1.1.4.0    with the   - SOA Extension for JDeveloper 11.1.1.4.0 installed Both can be downloaded from here:  http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html You will not need a running WebLogic Server domain to complete the steps and Format Builder tests in this article.     Start with a SOA Composite containing a File Adapter The Format Builder is part of the File Adapter so start by creating a new SOA Project and Composite. Here is a quick summary for those not familiar with these steps - Start JDeveloper - From the Main Menu choose File->New - In the New Gallery window that opens Expand the “General” category and Select the Applications node.   Then choose SOA Application from the Items section on the right.  Finally press the OK button. - In Step 1 of the “Create SOA Application wizard” that appears enter an Application Name and an Directory of your     choice,   then press the Next button. - In Step 2 of the “Create SOA Application wizard”, press the Next button leaving all entries as defaulted. - In Step 3 of the “Create SOA Application wizard”, Enter a composite name of your choice and Press the Finish   Button These steps result in a new Application and SOA Project. The SOA Project contains a composite.xml file which is opened and shown below. For our example we have not defined a Mediator or a BPEL process to minimize the steps, but one or the other would eventually be needed to use the File Adapter we are about to create. Drag and drop the File Adapter icon from the Component Pallette onto either the LEFT side of the diagram under “Exposed Services” or the right side under “External References”.  (See the Green Circle in the image below).  Placing the adapter on the left side would indicate the file being processed is inbound to the composite, if the adapter is placed on the right side then the data is outbound to a file.     Note that the same Format Builder definition can be used in both directions.  
For example we could use the format with a File Adapter on the left side of the composite to parse fixed data into XML, modify the data in our Composite or BPEL process and then use the same Format Builder definition with a File adapter on the right side of the composite to write the data back out in the same fixed data format When the File Adapter is dropped on the Composite the File Adapter Wizard Appears. Skip Past the first page, Step 1 of 9 by pressing the Next button. In Step 2 enter a service name of your choice as shown below, then press Next   When the Native Format Builder appears, skip the welcome page by pressing next. Also press the Next button to accept the settings on Step 3 of 9 On Step 4, select Read File and press the Next button as shown below.   On Step 5 enter a directory that will contain a file with the input data, then  Press the Next button as shown below. In step 6, enter *.txt or another file format to select input files from the input directory mentioned in step 5. ALSO check the “Files contain Multiple Messages” checkbox and set the “Publish Messages in Batches of” field to 1.  The value can be set higher to increase the number of logical order group records returned on each poll of the file adapter.  In other words, it determines the number of Orders that will be sent to each instance of a Mediator or Composite processing using the File Adapter.   Skip Step 7 by pressing the Next button In Step 8 press the Gear Icon on the right side to load the Native Format Builder.       Native Format Builder  appears Before diving into the format, here is an overview of the process. Approach - Bottom up Assuming an Order is a grouping of item records and a summary record…. - Define a separate  Complex Type for each Record Type found in the group.    (One for itemRecord and one for summaryRecord) - Define a Complex Type to contain the Group of Record types defined above   (LogicalOrderRecord) - Define a top level element to represent an order.  (order)   The order element will be of type LogicalOrderRecord   Defining the Format In Step 1 select   “Create new”  and  “Complex Type” and “Next”   In Step two browse to and select a file containing the test data shown at the start of this article. A link is provided at the end of this article to download a file containing the test data. Press the Next button     In Step 3 Complex types must be define for each type of input record. Select the Root-Element and Click on the Add Complex Type icon This creates a new empty complex type definition shown below. The fastest way to create the definition is to highlight the first line of the Sample File data and drag the line onto the  <new_complex_type> Format Builder introspects the data and provides a grid to define additional fields. Change the “Complex Type Name” to  “itemRecord” Then click on the ruler to indicate the position of fixed columns.  Drag the red triangle icons to the exact columns if necessary. Double click on an existing red triangle to remove an unwanted entry. In the case below fields are define in columns 0-3, 4-28, 29-eol When the field definitions are correct, press the “Generate Fields” button. Field entries named C1, C2 and C3 will be created as shown below. Click on the field names and rename them from C1->itemNum, C2->itemDesc and C3->itemCost  When all the fields are correctly defined press OK to save the complex type.        Next, the process is repeated to define a Complex Type for the SummaryRecord. 
Select the Root-Element in the schema tree and press the new complex type icon Then highlight and drag the Summary Record from the sample data onto the <new_complex_type>   Change the complex type name to “summaryRecord” Mark the fixed fields for Order Number and Order Date. Press the Generate Fields button and rename C1 and C2 to itemNum and orderDate respectively.   The last complex type to be defined is a type to hold the group of items and the summary record. Select the Root-Element in the schema tree and click the new complex type icon Select the “<new_complex_type>” entry and click the pencil icon   On the Complex Type Details page change the name and type of each input field. Change line 1 to be named item and set the Type  to “itemRecord” Change line 2 to be named summary and set the Type to “summaryRecord” We also need to indicate that itemRecords repeat in the input file. Click the pencil icon at the right side of the item line. On the Edit Details page change the “Max Occurs” entry from 1 to UNBOUNDED. We also need to indicate how to identify an itemRecord.  Since each item record has “.” in column 32 we can use this fact to differentiate an item record from a summary record. Change the “Look Ahead” field to value 32 and enter a period in the “Look For” field Press the OK button to save entry.     Finally, its time to create a top level element to represent an order. Select the “Root-Element” in the schema tree and press the New element icon Click on the <new_element> and press the pencil icon.   Set the Element Name to “order” and change the Data Type to “logicalOrderRecord” Press the OK button to save the element definition.   The final definition should match the screenshot below. Press the Next Button to view the definition source.     Press the Test Button to test the definition   Press the Green Triangle Icon to run the test.   And we are presented with an unwelcome error. The error states that the processor ran out of data while working through the definition. The processor was unable to differentiate between itemRecords and summaryRecords and therefore treated the entire file as a list of itemRecords.  At end of file, the “summary” portion of the logicalOrderRecord remained unprocessed but mandatory.   This root cause of this error is the loss of our “lookAhead” definition used to identify itemRecords. This appears to be a bug in the  Native Format Builder 11.1.1.4.0 Luckily, a simple workaround exists. Press the Cancel button and return to the “Step 4 of 4” Window. Manually add    nxsd:lookAhead="32" nxsd:lookFor="."   attributes after the maxOccurs attribute of the item element. as shown in the highlighted text below.   When the lookAhead and lookFor attributes have been added Press the Test button and on the Test page press the Green Triangle. The test is now successful, the first order in the file is returned by the File Adapter.     Below is a complete listing of the Result XML from the right column of the screen above   Try running it The downloaded input test file and completed schema file can be used for testing without following all the Native Format Builder steps in this example. Use the following link to download a file containing the sample data. Download Sample Input Data This is the best approach rather than cutting and pasting the input data at the top of the article.  Since the data is fixed length it’s very important to watch out for trailing spaces in the data and to ensure an eol character at the end of every line. 
The download file is correctly formatted. The final schema definition can be downloaded at the following link Download Completed Schema Definition   - Save the inputData.txt file to a known location like the xsd folder in your project. - Save the inputData_6.xsd file to the xsd folder in your project. - At step 1 in the Native Format Builder wizard  (as shown above) check the “Edit existing” radio button,    then browse and select the inputData_6.xsd file - At step 2 of the Format Builder configuration Wizard (as shown above) supply the path and filename for    the inputData.txt file. - You can then proceed to the test page and run a test. - Remember the wizard bug will drop the lookAhead and lookFor attributes,  you will need to manually add   nxsd:lookAhead="32" nxsd:lookFor="."    after the maxOccurs attribute of the item element in the   LogicalOrderRecord Complex Type.  (as shown above)   Good Luck with your Format Project


  • xvidcap: Error accessing sound input from /dev/dsp

    - by stivlo
    I'm running Ubuntu 11.10 and I'm trying xvidcap to record a screencast with audio from the microphone, however it can't record any sound: $ xvidcap --file appo.avi --cap_geometry 700x500-0+0 Error accessing sound input from /dev/dsp Sound disabled! Sure enough /dev/dsp doesn't even exist: $ sudo ls -lh /dev/dsp ls: cannot access /dev/dsp: No such file or directory I found a blog post about fixing xvidcap sound input, however if I try the suggestion I get: $ sudo modprobe snd-pcm-oss FATAL: Module snd_pcm_oss not found. So the question is, how can I create /dev/dsp? The problem behind the problem is: how can I record sound from the microphone with xvidcap? So workarounds are welcome too. UPDATE: I've followed the suggestion of James, and something has improved. The error accessing /dev/dsp is gone, however now I get: [oss @ 0x8e0c120] Estimating duration from bitrate, this may be inaccurate xtoffmpeg.c add_audio_stream(): Can't initialize fifo for audio recording Now when I record xvidcap appears in the recording tab of pavucontrol and I can choose Audio stream from Internal Audio Analog Stereo or Monitor of Internal Audio Analog Stereo, I tried both just in case, but the video is still mute. UPDATE 2: I found that "Monitor of" is the one to record application sounds, while for microphone, I should choose "Internal Audio Analog Stereo". To rule out other problems, such as with the microphone, I tried with gnome-sound-recorder and it works. Actually I jumped on my chair, since the volume was too high! :-)
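    Not from the original post, but one hedged workaround on PulseAudio-based Ubuntu releases is the padsp wrapper (from the pulseaudio-utils package), which emulates the old OSS /dev/dsp interface for a single program:

        padsp xvidcap --file appo.avi --cap_geometry 700x500-0+0

    This only helps if xvidcap's sound problem really is the missing OSS device; the fifo error described in the update may need a different fix.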


  • Mapping an Amazon server to a domain name registered with name.com

    - by S4M
    I have an Amazon S3 web server and a domain name registered with name.com (the name is sam-experiments.com). I am trying to have a static page hosted on the Amazon web server displayed at http://www.sam-experiments.com. On the web server side, my bucket name is 'www.sam-experiments.com', and it links to here: http://www.sam-experiments.com.s3-website-eu-west-1.amazonaws.com/

    On name.com, I added a new record with the following characteristics (as specified in the documentation here: http://docs.amazonwebservices.com/AmazonS3/latest/dev/VirtualHosting.html#VirtualHostingCustomURLs):

        Record Type: CNAME
        Record Host: www.sam-experiments.com
        Record Answer: www.sam-experiments.com.s3.amazonaws.com.
        TTL: 300

    However, nothing gets displayed at www.sam-experiments.com, and I am not able to find what I am doing wrong. I would really appreciate some tips. Thanks!

    Note: I already posted this question on Stack Overflow but didn't get any answer, so I thought posting here may be more appropriate.
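    Purely as an editorial observation, not from the original post: the record above points at the generic s3.amazonaws.com endpoint, while the bucket is being served from the s3-website-eu-west-1 website endpoint mentioned earlier; for static website hosting the CNAME usually needs to target that website endpoint, and many DNS panels expect just the host label rather than the full name. A hedged sketch of such a record:

        Record Type: CNAME
        Record Host: www
        Record Answer: www.sam-experiments.com.s3-website-eu-west-1.amazonaws.com.
        TTL: 300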


  • C++ Set Erase Entry Question

    - by Wallace
    Hi. I've run into a problem. I'm using a C++ multiset. This is the test file:

        Score: 3-1 Ben Steven
        Score: 1-0 Ben
        Score: 0-0
        Score: 1-1 Cole
        Score: 1-2 Ben

    I'm using a while loop and an ifstream (fin1) to read in from the test file above:

        multiset<string, less<string> > myset;

        while(!fin1.eof())
        {
            fin1 >> scoreName;
            if(scoreName == "Score:")
            {
                // calculates number of matches played
            }
            else
            {
                goalCheck = scoreName.substr(1,1);
                if(goalCheck == "-")
                {
                    string lGoal, rGoal;
                    lGoal = scoreName.substr(0,1);
                    rGoal = scoreName.substr(2,1);
                    int leftGoal, rightGoal;
                    leftGoal = atoi(lGoal.c_str());
                    rightGoal = atoi(rGoal.c_str());
                    if(leftGoal > rightGoal)        // if team wins
                    {
                        // some computations
                    }
                    else if(leftGoal < rightGoal)   // if team loses
                    {
                        // computations
                    }
                    else if(leftGoal == rightGoal)  // if team draws
                    {
                        // computations
                    }
                }
                else
                {
                    myset.insert(myset.begin(), scoreName);
                }
            }
        }

    I'm inserting all names into myset (regardless of wins/losses/draws) in my last else statement, but I only need the names from matches that were won or drawn; names from losing matches should not be included in myset. In the test file above only one match was lost (1-2), and I want to remove "Ben". How can I do that? I tried to use myset.erase(), but I'm not sure how to get it to point to "Ben" and remove it from myset. Any help is much appreciated. Thanks.
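    An editorial sketch, not from the original thread, of how multiset::erase can be pointed at one occurrence of a name (here "Ben" is just the value read into scoreName):

        multiset<string>::iterator it = myset.find(scoreName);
        if (it != myset.end())
            myset.erase(it);          // removes a single occurrence of that name

        // myset.erase(scoreName);    // would remove every copy of the name instead

    Since the losing score ("1-2") is read before the name that follows it, another option is to remember that the previous token was a losing score and simply skip the insert for the next name(s).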


  • How do I use MediaRecorder to record video without causing a segmentation fault?

    - by rabidsnail
    I'm trying to use android.media.MediaRecorder to record video, and no matter what I do the android runtime segmentation faults when I call prepare(). Here's an example: public void onCreate(Bundle savedInstanceState) { Log.i("video test", "making recorder"); MediaRecorder recorder = new MediaRecorder(); contentResolver = getContentResolver(); try { super.onCreate(savedInstanceState); Log.i("video test", "--------------START----------------"); SurfaceView target_view = new SurfaceView(this); Log.i("video test", "making surface"); Surface target = target_view.getHolder().getSurface(); Log.i("video test", target.toString()); Log.i("video test", "new recorder"); recorder = new MediaRecorder(); Log.i("video test", "set display"); recorder.setPreviewDisplay(target); Log.i("video test", "pushing surface"); setContentView(target_view); Log.i("video test", "set audio source"); recorder.setAudioSource(MediaRecorder.AudioSource.MIC); Log.i("video test", "set video source"); recorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT); Log.i("video test", "set output format"); recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP); Log.i("video test", "set audio encoder"); recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB); Log.i("video test", "set video encoder"); recorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP); Log.i("video test", "set max duration"); recorder.setMaxDuration(3600); Log.i("video test", "set on info listener"); recorder.setOnInfoListener(new listener()); Log.i("video test", "set video size"); recorder.setVideoSize(320, 240); Log.i("video test", "set video frame rate"); recorder.setVideoFrameRate(15); Log.i("video test", "set output file"); recorder.setOutputFile(get_path(this, "foo.3gp")); Log.i("video test", "prepare"); recorder.prepare(); Log.i("video test", "start"); recorder.start(); Log.i("video test", "sleep"); Thread.sleep(3600); Log.i("video test", "stop"); recorder.stop(); Log.i("video test", "release"); recorder.release(); Log.i("video test", "-----------------SUCCESS------------------"); finish(); } catch (Exception e) { Log.i("video test", e.toString()); recorder.reset(); recorder.release(); Log.i("video tets", "-------------------FAIL-------------------"); finish(); } } public static String get_path (Context context, String fname) { String path = context.getFileStreamPath("foo").getParentFile().getAbsolutePath(); String res = path+"/"+fname; Log.i("video test", "path: "+res); return res; } class listener implements MediaRecorder.OnInfoListener { public void onInfo(MediaRecorder recorder, int what, int extra) { Log.i("video test", "Video Info: "+what+", "+extra); } }
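    An editorial note, not from the original question: one frequent cause of native crashes with MediaRecorder is missing manifest permissions, or handing setPreviewDisplay a Surface that has not actually been created yet. The permissions part, for reference (standard Android permission names):

        <uses-permission android:name="android.permission.RECORD_AUDIO" />
        <uses-permission android:name="android.permission.CAMERA" />
        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    The SurfaceView in the code above is handed to setPreviewDisplay before it is attached to the window, so waiting for SurfaceHolder.Callback.surfaceCreated() before preparing the recorder is also worth trying.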


  • Screen shots and documentation on the cheap

    - by Kyle Burns
    Occasionally I am surprised to open up my toolbox and find a great tool that I've had for years and never noticed. The other day I had just such an experience with Windows Server 2008. A co-worker of mine was squinting to read the screenshots that he had taken using the "Print Screen, paste" method in WordPad and asked me if there was a better tool available at a reasonable cost. My first instinct was to take a look at CamStudio for him, but I also knew that he had an immediate need to take some more screenshots, so I decided to check whether the Snipping Tool found in Windows 7 is also available in Windows Server 2008. I clicked the Start button and typed "snip" into the search bar, and while the Snipping Tool did not come up, a Control Panel item labeled "Record steps to reproduce a problem" did.

    The application behind the Control Panel entry was "Problem Steps Recorder" (PSR.exe); I have confirmed that it is available in Windows 7 and Windows Server 2008 R2 but have not checked other platforms. It presents a pretty minimal and intuitive interface, providing a "Start Record", "Stop Record", and "Add Comment" button. The "Start Record" button, shockingly, starts recording and, sure enough, the "Stop Record" button stops recording. The "Add Comment" button prompts for a comment and for you to highlight the area of the screen to which your comment relates. Once you're done recording, the tool outputs an MHT file packaged in a ZIP archive. This file contains a series of screenshots depicting the user's interactions, with timestamps and descriptive text (such as "User left click on 'Test' in 'My Page - Windows Internet Explorer'"), as well as the comments made along the way and some diagnostics about the applications captured.

    The Problem Steps Recorder looks like a simple solution to the most common of my documentation needs, the kind that can turn "I can't understand how to make it do what you're reporting" into "Oh, I see what you're talking about and will fix it right away". If you're like me and haven't yet discovered this tool, give it a whirl and see for yourself.
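    As a side note not in the original post, PSR can also be driven from the command line, which is handy for kicking off a recording from a script; the switches below are as I recall them, so check psr.exe /? before relying on them:

        psr.exe /start /output C:\temp\steps.zip /sc 1 /maxsc 100 /gui 0
        ... reproduce the problem ...
        psr.exe /stop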


  • 3D Printed Records Bring New Tunes to Iconic Fisher-Price Toy Player

    - by Jason Fitzpatrick
    Have an old toy Fisher-Price record player your kids aren't exactly enamored with? Now, thanks to the miracle of 3D printing, you can create new records for it. Courtesy of Fred Murphy, this Instructables tutorial will guide you through the process of taking music and encoding it in a 3D printer file that will yield a tiny plastic record the Fisher-Price record player can play. Check out the video above to see the finished product or hit up the link below to read the full tutorial.

    3D Printing for the Fisher-Price Record Player [via Make]


  • Overheating when using Flash

    - by Ben
    I am having overheating issues when watching flash videos. Here is all the relevant information that I could think of. Please help. --Often when playing flash videos it operates at 95 degrees C. --Only gets that hot when on full screen (but still in the 80s otherwise) --Sometimes reaches 100 degrees and shuts down. --fan runs at around 4500 RPM when hot (not to mention I can here it speed up) --Happens under Ubuntu 12.04 and 12.10 (did not use Ubuntu before then) --Overheating happens on more than one site (for example youtube and Comedy Central) --My computer does not overheat otherwise. --Usually runs around 50-60 degrees C. --Maybe if compiling or doing updates can it get to 70 or 75 degree C. --Have a dual boot, and under Windows I do not have this issue. --I use firefox, but have the same issue if using Chrome --I have a Lenovo T410, currently running Ubuntu 12.10 --Intel® Core™ i5 CPU M 540 @ 2.53GHz × 4 --4 GB of memory, 64 bit OS --Graphics: NVS 3100M/PCIe/SSE2 Thanks and let me know if you need more info, Ben


  • XML::Parser output contains unwanted hash references

    - by seaworthy
    So I wrote a parser routine to take one xml file and reparse into another one. This code I later modified to split a large xml file into many small xml files. I am having a problem with an output. Parsing works fine the only thing output also includes unwanted strings like HASH(0x19f9b58), I am not sure why and need set of friendly eyes. use Encode; use XML::Parser; my $parser = XML::Parser->new( Handlers => {Start => \&handle_elem_start, End => \&handle_elem_end,Char => \&handle_char_data,}); my $record; my $file = shift @ARGV; if( $file ) {$parser->parsefile( $file );} exit; sub handle_elem_start { my( $expat, $name, %atts ) = @_; if ($name eq 'articles'){$file="_data.xml";unlink($file);} $record .= "<"; $record .= "$name"; foreach my $key (keys %atts){$record .= " $key=\"$atts{$key}\"";} $record .= ">"; } sub handle_char_data { my( $expat, $text ) = @_; $text = decode_utf8( $text ); $record .= "$text"; } sub handle_elem_end { my( $expat, $name ) = @_; $record .= "</$name>"; if( $name eq 'article' ) { open (MYFILE, '>>'.$file); print MYFILE $record; close (MYFILE); print $record; $record = {}; } return unless( $name eq 'article' ); } Sample output: ... </article>HASH(0x19f9b40) <article doi="10.1103/PhysRevSeriesI.9.304"> <journal short="Phys. Rev. (Series I)" jcode="PRI">Physical Review (Series I)</journal> <volume>9</volume> <issue printdate="1899-11-00">5</issue> <fpage>304</fpage> <lpage>309</lpage> <seqno>1</seqno> <price></price><tocsec>Articles</tocsec> <arttype type="article"></arttype><doi>10.1103/PhysRevSeriesI.9.304</doi> <title>An Investigation of the Magnetic Qualities of Building Brick</title> <authgrp> <author><givenname>O.</givenname><middlename>A.</middlename><surname>Gage</surname></author> <author><givenname>H.</givenname><middlename>E.</middlename><surname>Lawrence</surname></author> </authgrp> <cpyrt> <cpyrtdate date="1899"></cpyrtdate><cpyrtholder>The American Physical Society</cpyrtholder> </cpyrt> </article>HASH(0x19f9b58) ... HASH strings are not wanted, please advise.
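    An editorial observation, not from the original post: strings like HASH(0x19f9b58) are what Perl prints when a hash reference is interpolated into a string, and the end handler above resets the accumulator with $record = {}; after each article, so the next record starts by concatenating onto a hash reference. A minimal sketch of the change that observation implies:

        # reset the accumulator to an empty string, not a hash reference;
        # concatenating onto a hashref stringifies it as HASH(0x...)
        $record = "";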


  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs: Any record generated must be able to be connected to any other record in any other user table (excluding itself...the record, not the table). These "connections" are directional, and the list of connections a record has is user ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended. A record's field can also include aggregate information from it's connections (like obtaining average, sum, etc) that must be updated on change from another record it's connected to. To conserve memory, only relevant information must be loaded at any one time (can't load the entire database in memory at load and go from there). I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote db. Neither the user tables, connections or records are known at design time as they are user generated. I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt on this, I had one object managing all a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became....onerous (ie...a huge spaghettified mess). Tracing dependencies using this method almost became impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing it own data and connections to other records. Doing this increases my memory footprint but also let me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cycling recursive updates, etc. My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas so I wanted to ask and see if anybody else has any ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far: Store all the connections in one big table. If I do this, either I load all connections at once (one big db call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB called to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming vs outgoing). 
This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal but it does mean that when I load a user table, I only need to load one 'connection' table and have all the information I need. This also presents a separate problem, that of connection object creation. Since each user table has a list of all connections, there are two opportunities for a connection object to be made. However, connections objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection. Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
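    Not part of the original question - a small sketch of what option 1 (a single connections table) might look like in SQL, with one index per direction so that "all connections from table X" and "all connections to table X" are both cheap index lookups rather than full scans (names are placeholders):

        CREATE TABLE connection (
            from_table_id     INTEGER NOT NULL,
            from_record_id    INTEGER NOT NULL,
            from_record_order INTEGER NOT NULL,
            to_table_id       INTEGER NOT NULL,
            to_record_id      INTEGER NOT NULL,
            to_record_order   INTEGER NOT NULL
        );

        CREATE INDEX ix_connection_from ON connection (from_table_id, from_record_id);
        CREATE INDEX ix_connection_to   ON connection (to_table_id,   to_record_id);

    With indexes on both ends, loading the connections for one user table is a pair of index range scans, which is usually what makes the "one big table" option viable despite its size.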


  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
    Background Designing Stored Procedures that are safe for multiple subscribers (to call simultaneously) can be challenging.  For example let’s say that you want multiple worker processes to poll a shared work queue that’s encapsulated as a SQL Table. This is a common scenario and through experience you’ll find that you want to use Table Hints to prevent unwanted locking when performing simultaneous queries on the same table. There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both NOLOCK and READPAST table hints allow you to SELECT from a table without placing a LOCK on that table. However, SELECTs with the READPAST hint will ignore any records that are locked due to being updated/inserted (or otherwise “dirty”), whereas a SELECT with NOLOCK ignores all locks including dirty reads. For the initial update of the flag (that marks the record as available for subscription) I don’t use the NOLOCK Table Hint because I want to be sensitive to the “active” records in the table and I want to exclude them.  I use an Update Lock (UPDLOCK) in conjunction with a WHERE clause that uses a sub-select with a READPAST Table Hint in order to explicitly lock the records I’m updating (UPDLOCK) but not place a lock on the table when selecting the records that I’m going to update (READPAST). UPDATES should be allowed to lock the rows affected because we’re probably changing a flag on a record so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should explicitly use the UPDLOCK to guard against lock escalation. A SELECT to check for the next record(s) to process can result in a shared read lock being held by more than one subscriber polling the shared work queue (SQL table). It is expected that more than one worker process (or server) might try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can because another process has a shared read lock in place. Thus without the UPDLOCK hint the result would be a lock escalation deadlock; however with the UPDLOCK hint this condition is mitigated against. Note that using the READPAST table hint requires that you also set the ISOLATION LEVEL of the transaction to be READ COMMITTED (rather than the default of SERIALIZABLE). Guidance In the Stored Procedure that returns records to the multiple subscribers: Perform the UPDATE first. Change the flag that makes the record available to subscribers.  Additionally, you may want to update a LastUpdated datetime field in order to be able to check for records that “got stuck” in an intermediate state or for other auditing purposes. In the UPDATE statement use the (UPDLOCK) Table Hint on the UPDATE statement to prevent lock escalation. In the UPDATE statement also use a WHERE Clause that uses a sub-select with a (READPAST) Table Hint to select the records that you’re going to update. In the UPDATE statement use the OUTPUT clause in conjunction with a Temporary Table to isolate the record(s) that you’ve just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and to get the records’ identifiers within the same operation. Finally do a set-based SELECT on the main Table (using the Temporary Table to identify the records in the set) with either a READPAST or NOLOCK table hint.  
Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data that you want to return to the multiple subscribers; or use READPAST if you're sure there are no other processes (besides the multiple subscribers) that might be updating column data in the table for other purposes (e.g. changes to a person’s last name).  NOLOCK is generally the better fit in this part of the scenario. See the following as an example: CREATE PROCEDURE [dbo].[usp_NewCustomersSelect] AS BEGIN -- OVERRIDE THE DEFAULT ISOLATION LEVEL SET TRANSACTION ISOLATION LEVEL READ COMMITTED -- SET NOCOUNT ON SET NOCOUNT ON -- DECLARE TEMP TABLE -- Note that this example uses CustomerId as an identifier; -- you could just use the Identity column Id if that’s all you need. DECLARE @CustomersTempTable TABLE ( CustomerId NVARCHAR(255) ) -- PERFORM UPDATE FIRST -- [Customers] is the name of the table -- [Id] is the Identity Column on the table -- [CustomerId] is the business document key used to identify the -- record globally, i.e. in other systems or across SQL tables -- [Status] is INT or BIT field (if the status is a binary state) -- [LastUpdated] is a datetime field used to record the time of the -- last update UPDATE [Customers] WITH (UPDLOCK) SET [Status] = 1, [LastUpdated] = GETDATE() OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable WHERE ([Id] = (SELECT TOP 100 [Id] FROM [Customers] WITH (READPAST) WHERE ([Status] = 0) ORDER BY [Id] ASC)) -- PERFORM SELECT FROM ENTITY TABLE SELECT [C].[CustomerId], [C].[FirstName], [C].[LastName], [C].[Address1], [C].[Address2], [C].[City], [C].[State], [C].[Zip], [C].[ShippingMethod], [C].[Id] FROM [Customers] AS [C] WITH (NOLOCK), @CustomersTempTable AS [TEMP] WHERE ([C].[CustomerId] = [TEMP].[CustomerId]) END In a system that has been designed to have multiple status values for records that need to be processed in the Work Queue it is necessary to have a “Watch Dog” process by which “stale” records in intermediate states (such as “In Progress”) are detected, i.e. a [Status] of 0 = New or Unprocessed; a [Status] of 1 = In Progress; a [Status] of 2 = Processed; etc.. Thus, if you have a business rule that states that the application should only process new records if all of the old records have been processed successfully (or marked as an error), then it will be necessary to build a monitoring process to detect stalled or stale records in the Work Queue, hence the use of the LastUpdated column in the example above. The Status field along with the LastUpdated field can be used as the criteria to detect stalled / stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records I would recommend using the same kind of lock semantics as suggested above. 
The example below looks for records that have been in the “In Progress” state ([Status] = 1) for greater than 60 seconds: CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog] AS BEGIN -- TO OVERRIDE THE DEFAULT ISOLATION LEVEL SET TRANSACTION ISOLATION LEVEL READ COMMITTED -- SET NOCOUNT ON SET NOCOUNT ON DECLARE @MaxWait int; SET @MaxWait = 60 IF EXISTS (SELECT 1 FROM [dbo].[Customers] WITH (READPAST) WHERE ([Status] = 1) AND (DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait)) BEGIN SELECT 1 AS [IsWatchDogError] END ELSE BEGIN SELECT 0 AS [IsWatchDogError] END END Downloads The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures and one to populate the sample database with 10,000 sample records.  I am very grateful to Red-Gate software for their excellent SQL Data Generator tool which enabled me to create these sample records in no time at all. References http://msdn.microsoft.com/en-us/library/ms187373.aspx http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492 http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx

