Search Results

Search found 6852 results on 275 pages for 'ptr record'.


  • Remotely Schedule and Stream Recorded TV in Windows 7 Media Center

    - by DigitalGeekery
    Have you ever been away from home and suddenly realized you forgot to record your favorite program? Now Windows 7 Media Center users can schedule recordings remotely from their phones or mobile devices with Remote Potato.

    How it Works: Remote Potato installs server software on the host computer running Windows 7 Media Center. Once the software is installed, we’ll need to do some port forwarding on the router and set up an optional dynamic DNS address. When setup is completed, we will access the application through a web-based interface. Silverlight is required for streaming recorded TV, but scheduling recordings can be done through an HTML interface.

    Installing Remote Potato: Download and install Remote Potato on the Media Center PC. (See download link below.) If you plan to stream any recorded TV, you’ll also want to install the streaming pack located on the same page. It isn’t required for all shows, only those that use the AC3 audio codec. Click Yes to allow Remote Potato to add rules to the Windows Firewall for remote access. You’ll likely need to accept a few UAC prompts. When notified that the rules were added, click OK. Remote Potato will then prompt you to allow administrator privileges to reserve a URL for its web server. Click Yes. The Remote Potato server will start. Click on the configuration button at the right to reveal the settings tabs. On the General tab, you’ll have the option to run Remote Potato on startup and minimized in the system tray. If you’re running Media Center on a dedicated HTPC, you’ll probably want to enable both startup options.

    Forwarding Ports on Your Router: You’ll need to forward a couple of ports on your router. By default, these will be ports 9080 and 9081. In this example we’re using a Linksys WRT54GL router; the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Applications & Gaming tab, and then the Port Range Forward tab. Under Application, type in a name of your choosing. In both the Start and End boxes, type the port number 9080. Enter the local IP address of your Media Center computer in the IP address column. Click the check box under Enable. Repeat the process on the next line, but this time use port 9081. When finished, click the Save Settings button. Note: it’s highly recommended that you configure the home computer running Media Center & Remote Potato with a static IP address.

    Find Your IP Address: You’ll need to find the IP address assigned to your router by your ISP. There are many ways to do this, but a quick and easy way is to visit a site like checkip.dyndns.org (link available below). The current external IP address of your router will be displayed in the browser.

    Dynamic DNS: This is an optional step, but it’s highly recommended. Many routers, such as the Linksys WRT54GL we are using, support Dynamic DNS (DDNS). What Dynamic DNS allows you to do is associate your home router’s external IP address with a domain name. Every time your home router is assigned a new IP address by your ISP, the domain name is updated to point to the new IP address. Remote Potato’s user interface is accessed over the Internet by connecting to your router’s IP address followed by a colon and the port number.
    (Ex: XXX.XXX.XXX.XXX:9080) Instead of constantly having to look up and remember an IP address, you can use DDNS along with a third-party provider like DynDNS.com to sign up for a free domain name and configure it to be updated each time your router is assigned a new IP address. Go to the DynDNS.com website (see link at the end of the article) and sign up for a free domain name. You’ll need to register and confirm by email. Once you’ve signed in and selected your domain name, click Activate Services. You’ll get a confirmation message that your domain name has been activated. On the Linksys WRT54GL, click on the Setup tab and then DDNS. Select DynDNS.org, or TZO.com if you prefer to use their service, from the drop-down list. With DynDNS, you’ll need to fill in the username and password you signed up with at the DynDNS website and the hostname you chose. Note: You can connect over your local network with the IP address of the computer running Remote Potato followed by a colon and the port number. Ex: 192.168.1.2:9080

    Logging in to Remote Potato and Recording a Show: Once you connect, you’ll see the start page. To view the TV listings, click on TV Guide. You’ll then see your guide listings. There are a few ways to navigate the listings. At the top left, you can click on any of the preset time buttons to jump to the listings at that time of the day. Click on the arrows to the right and left of the day and date at the top center to proceed to the previous or next day. Or, jump to a specific day with the day and date buttons at the top right. To set up a recording, click on a program. You can choose to record the individual show or the entire series by clicking on Record Show or Record Series.

    Remote Potato on Mobile Devices: Perhaps the coolest feature of Remote Potato is the ability to schedule recordings from your phone or mobile device. Note: For any devices or computers without Silverlight, you will be prompted to view the HTML page. Select Browse Listings. Select your program to record. In the Program Details, select Record Show to record the single episode or Record Series to record all instances of the series. You will then see a red dot on the program listing to indicate that the show is scheduled for recording.

    Streaming Recorded TV: Click on Recorded TV from the home screen to access your previously recorded TV programs. Click on the selection you wish to stream. Click on Play. If you receive an error message, you’ll need to install the streaming pack for Remote Potato. It is found on the same download page as the installation files. (See link below.) The Begin from slider allows you to start playback from the beginning (by default) or from a different point in the program by moving the slider. The Quality (bitrate) setting allows you to choose the quality of the playback. We found the video quality on the Normal setting to be pretty lousy, and Low was just pointless. High was the best overall viewing experience as it provided smooth, quality video playback. We experienced significant stuttering during playback using the Ultra High setting. Click Start when you are ready to begin. When playback begins you’ll see a slider at the top right. Move the slider left or right to increase or decrease the size of the video. There’s also a button to switch to full screen.

    Media Center users who travel frequently or are always on the go will likely find Remote Potato to be a blessing. Since being released earlier this year, updates for Remote Potato have come fast and furious.
    The latest beta release includes support for streaming music and photos. If you like those nice network TV logos, check out our article on adding TV channel logos to Windows Media Center. Downloads and Links: Download Remote Potato and Streaming Pack | Find your IP address | Sign Up for a Domain Name at DynDNS.com
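    For the "Find Your IP Address" step above, a small script can save a trip to the browser. The sketch below is only an illustration (it assumes the checkip.dyndns.org service mentioned in the article and the default port 9080; it is not part of Remote Potato):

        # Sketch: look up the router's external IP and print the Remote Potato URL.
        # Assumes the checkip.dyndns.org service and the default port 9080.
        import re
        import urllib.request

        REMOTE_POTATO_PORT = 9080  # default port from the article

        def external_ip():
            # checkip.dyndns.org returns a tiny HTML page containing the caller's IP
            with urllib.request.urlopen("http://checkip.dyndns.org") as response:
                body = response.read().decode("ascii", errors="ignore")
            match = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})", body)
            return match.group(1) if match else None

        if __name__ == "__main__":
            ip = external_ip()
            if ip:
                print(f"Remote Potato web interface: http://{ip}:{REMOTE_POTATO_PORT}/")
            else:
                print("Could not determine the external IP address.")

    With a DDNS hostname configured, you would of course use the hostname instead of the raw address.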

    Read the article

  • PostgreSQL 8.4 won't start after blackout

    - by RiZe
    I have problem with starting PostgreSQL 8.4 on Ubuntu 9.10 Server after blackout. When I try to connect to the database it says: psql: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. When I try to start it by using command sudo -u postgres /etc/init.d/postgresql-8.4 start * Starting PostgreSQL 8.4 database server [ OK ] Netstat output netstat -tulp (No info could be read for "-p": geteuid()=1000 but you should be root.) Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 localhost:postgresql *:* LISTEN - tcp 0 0 192.168.1.35:svn *:* LISTEN - tcp 0 0 192.168.1.35:http-alt *:* LISTEN - tcp 0 0 *:ssh *:* LISTEN - tcp6 0 0 localhost:postgresql [::]:* LISTEN - tcp6 0 0 [::]:ssh [::]:* LISTEN - udp 0 0 *:bootpc *:* - But still don't work so lets restart it sudo -u postgres /etc/init.d/postgresql-8.4 restart * Restarting PostgreSQL 8.4 database server * The PostgreSQL server failed to start. Please check the log output: 2009-11-30 13:39:37 CET LOG: database system was shut down at 2009-11-30 13:39:33 CET 2009-11-30 13:39:37 CET LOG: autovacuum launcher started 2009-11-30 13:39:37 CET LOG: database system is ready to accept connections 2009-11-30 13:39:37 CET LOG: incomplete startup packet 2009-11-30 13:39:38 CET LOG: server process (PID 2240) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:38 CET LOG: terminating any other active server processes 2009-11-30 13:39:38 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:38 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:37 CET 2009-11-30 13:39:38 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:38 CET LOG: record with zero length at 0/11D464C 2009-11-30 13:39:38 CET LOG: redo is not required 2009-11-30 13:39:38 CET LOG: autovacuum launcher started 2009-11-30 13:39:38 CET LOG: database system is ready to accept connections 2009-11-30 13:39:38 CET LOG: server process (PID 2248) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:38 CET LOG: terminating any other active server processes 2009-11-30 13:39:38 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:38 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:38 CET 2009-11-30 13:39:38 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:38 CET LOG: record with zero length at 0/11D4690 2009-11-30 13:39:38 CET LOG: redo is not required 2009-11-30 13:39:39 CET LOG: autovacuum launcher started 2009-11-30 13:39:39 CET LOG: database system is ready to accept connections 2009-11-30 13:39:39 CET LOG: server process (PID 2256) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:39 CET LOG: terminating any other active server processes 2009-11-30 13:39:39 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:39 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:38 CET 2009-11-30 13:39:39 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:39 CET LOG: record with zero length at 0/11D46D4 2009-11-30 13:39:39 CET LOG: redo is not required 2009-11-30 13:39:39 CET LOG: autovacuum launcher started 2009-11-30 13:39:39 CET LOG: database system is ready to accept connections 2009-11-30 13:39:39 CET LOG: server process (PID 2264) 
was terminated by signal 11: Segmentation fault 2009-11-30 13:39:39 CET LOG: terminating any other active server processes 2009-11-30 13:39:39 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:39 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:39 CET 2009-11-30 13:39:39 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:40 CET LOG: record with zero length at 0/11D4718 2009-11-30 13:39:40 CET LOG: redo is not required 2009-11-30 13:39:40 CET LOG: autovacuum launcher started 2009-11-30 13:39:40 CET LOG: database system is ready to accept connections 2009-11-30 13:39:40 CET LOG: server process (PID 2272) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:40 CET LOG: terminating any other active server processes 2009-11-30 13:39:40 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:40 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:40 CET 2009-11-30 13:39:40 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:40 CET LOG: record with zero length at 0/11D475C 2009-11-30 13:39:40 CET LOG: redo is not required 2009-11-30 13:39:40 CET LOG: autovacuum launcher started 2009-11-30 13:39:40 CET LOG: database system is ready to accept connections 2009-11-30 13:39:41 CET LOG: server process (PID 2280) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:41 CET LOG: terminating any other active server processes 2009-11-30 13:39:41 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:41 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:40 CET 2009-11-30 13:39:41 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:41 CET LOG: record with zero length at 0/11D47A0 2009-11-30 13:39:41 CET LOG: redo is not required 2009-11-30 13:39:41 CET LOG: autovacuum launcher started 2009-11-30 13:39:41 CET LOG: database system is ready to accept connections 2009-11-30 13:39:41 CET LOG: server process (PID 2288) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:41 CET LOG: terminating any other active server processes 2009-11-30 13:39:41 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:41 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:41 CET 2009-11-30 13:39:41 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:41 CET LOG: record with zero length at 0/11D47E4 2009-11-30 13:39:41 CET LOG: redo is not required 2009-11-30 13:39:41 CET LOG: autovacuum launcher started 2009-11-30 13:39:41 CET LOG: database system is ready to accept connections 2009-11-30 13:39:42 CET LOG: server process (PID 2296) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:42 CET LOG: terminating any other active server processes 2009-11-30 13:39:42 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:42 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:41 CET 2009-11-30 13:39:42 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:42 CET LOG: record with zero length at 0/11D4828 2009-11-30 13:39:42 CET LOG: redo is not required 2009-11-30 13:39:42 CET LOG: autovacuum launcher started 2009-11-30 13:39:42 CET LOG: database system is ready to accept connections 2009-11-30 13:39:42 CET LOG: server process (PID 2304) 
was terminated by signal 11: Segmentation fault 2009-11-30 13:39:42 CET LOG: terminating any other active server processes 2009-11-30 13:39:42 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:42 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:42 CET 2009-11-30 13:39:42 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:42 CET LOG: record with zero length at 0/11D486C 2009-11-30 13:39:42 CET LOG: redo is not required 2009-11-30 13:39:43 CET LOG: autovacuum launcher started 2009-11-30 13:39:43 CET LOG: database system is ready to accept connections 2009-11-30 13:39:43 CET LOG: server process (PID 2312) was terminated by signal 11: Segmentation fault 2009-11-30 13:39:43 CET LOG: terminating any other active server processes 2009-11-30 13:39:43 CET LOG: all server processes terminated; reinitializing 2009-11-30 13:39:43 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:42 CET 2009-11-30 13:39:43 CET LOG: database system was not properly shut down; automatic recovery in progress 2009-11-30 13:39:43 CET LOG: record with zero length at 0/11D48B0 2009-11-30 13:39:43 CET LOG: redo is not required 2009-11-30 13:39:43 CET LOG: autovacuum launcher started 2009-11-30 13:39:43 CET LOG: database system is ready to accept connections [fail] So what happened and what can I do to solve this? Thanks for replies

    Read the article

  • SOA Suite Integration: Part 3: Loading files

    - by Anthony Shorten
    One of the most common scenarios in SOA integration is the loading of a file into the product from an external source. In Oracle SOA Suite there is a File Adapter that can process many file types into your BPEL process. For this example I will use the File Adapter to load a file of users and emails to update the user object within the Oracle Utilities Application Framework. Remember, you can repeat this process with other objects and other file types. Again, I am illustrating the ease of integration. The first thing is to create an empty BPEL process that will hold our flow. In Oracle JDeveloper this can be achieved by specifying the Define Service Later template (as other templates have predefined inputs and outputs, and in this case we want to specify those). So I will create a simpleFileLoad process to house our flow. You will start with an empty canvas, so you need to first specify the load part of the process using the File Adapter. Select the File Adapter from the Component Palette under BPEL Services and drag and drop it to the left-side Partner Links (left is input). You then name the service; in this case I chose LoadFile. Press Next. We will define the interface as part of the wizard, so select Define from operation and schema (specified later). Press Next. We are going to choose Read File to denote that we will read the file, and specify the default Operation Name as Read. Press Next. The next step is to tell the adapter the location of the files, how to process them, and what to do with them after they have been processed. I am using hardcoded locations in this example, but you can have logical locations as well. Press Next. I am now going to tell the adapter how to recognize the files I want to load. In my case I am using CSV files, and more importantly I am telling the adapter to run the process for each record in the file it encounters. Press Next. Now, I tell the adapter how often I want to poll for the files. I have taken the defaults. Press Next. At this stage I have no explanation of the format of the input. So I am going to invoke the Native Format Wizard, which will guide me through the process of creating the file input format. Clicking the purple cog icon will start the wizard. After an introduction screen (not shown), you specify the format of the input file. The File Adapter supports multiple format types. For this example, I will use Delimited as I am going to load a CSV file. Press Next. The best way for the wizard to work is with a sample. I have a sample file and the wizard will ask how much of the file to use as a template. I will use the defaults. Note: If you are using a character set other than US-ASCII, this is the point at which you specify the character set to use. Press Next. The sample contains multiple instances of a single record type. The wizard supports complex types as well. We will use the appropriate setting for our file. Press Next. You have to specify the file element and the record element. This will be used by the input wizard to translate the CSV data into an XML structure (this will make sense later). I am using LoadUsers as my file delimiter (root element) and User Record as my record root element. Press Next. As the file is CSV, the delimiter is "," and I will also specify that the End Of Line (EOL) indicator marks the end of a record. Press Next. Up until this point you have not given the columns their names. In my case my sample includes the column names in the first record.
    This is not always the case, but you can specify the names and formats of columns in this dialog (not shown). Press Next. The wizard now generates the schema for the input file. You can specify a name for the schema. I have used userupdate.xsd. We want to verify the schema, so press Test. You can test the schema by specifying an input sample and pressing the green play button. You will see the delimiters you specified earlier for the file and the records. Press Ok to continue. A confirmation screen will be displayed showing you the location of the schema in your project. Press Finish to return to the File Adapter configuration. You will now see the schema and elements prepopulated from the wizard. Press Next. The File Adapter configuration is now complete. Press Finish. Now you need to receive the input from the LoadFile component, so we need to place a Receive node in the BPEL process by dragging and dropping the Receive component from the Component Palette under BPEL Constructs onto the BPEL process. We link the Receive node with the LoadFile component by dragging the leftmost connect node of the Receive node to the LoadFile component. Once the link is established you need to name the Receive node appropriately and, as in the post of the last part of this series, you need to generate input variables for the BPEL process to hold the input records in. You now need to add the product Web Service. The process is the same as described in the post of the last part of this series. You drop the Web Service BPEL Service onto the right side of the process and fill in the details of the WSDL URL. You also have to add an Invoke node to call the service and generate the input and output variables for the call in the Invoke node. Now, to get the inputs from the file to the service. You have to use a Transform (you can use an Assign action, but a Transform action is more flexible). You drag and drop the Transform component from the Component Palette under Oracle Extensions and place it between the Receive and Invoke nodes. We name the Transform node Mapper File, associate the source of the mapping with the schema from the Receive node, and set the output to the input variable from the Invoke node. We now build the transform. We first map the user and email attributes by dragging and dropping the elements from the left to the right. The reason we needed to use the transform is that we will be telling the AS-User service that we want to issue an update action. Remember, when we registered the service we actually used Read as the default. If we do not otherwise inform the service to use the Update action, it will use the Read action instead (which is not desired). To specify the update action you need to click on the transactionType node on the right and select Set Text to set the action. You need to specify the transactionType of UPD (for update). The mapping is now complete. The final BPEL process is ready for deployment. You then deploy the BPEL process to the server and test the service by simply dropping a file, with the pattern/name you specified, into the directory you specified in the File Adapter. You will see each record as a separate instance entry in the Fusion Middleware Control console. You can now load files into the product. You can repeat this process for each type of file to process. While this was a simple example, it illustrates how loading data can be achieved using SOA Suite in conjunction with our products.
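    As a rough illustration of what the Native Format wizard is doing under the covers, here is a minimal sketch in Python (not part of SOA Suite; the LoadUsers/UserRecord element names follow the example above, while the user/email column names, the transactionType element, and the input file name are assumptions for illustration only) that turns each CSV record into a small XML structure and marks it with the UPD action set in the transform:

        # Sketch only: mimics the CSV-to-XML translation the File Adapter performs.
        import csv
        import xml.etree.ElementTree as ET

        def csv_to_xml(csv_path):
            root = ET.Element("LoadUsers")                    # file root element
            with open(csv_path, newline="") as handle:
                for row in csv.DictReader(handle):            # first record holds the column names
                    record = ET.SubElement(root, "UserRecord")  # record root element
                    ET.SubElement(record, "user").text = row["user"]      # assumed column name
                    ET.SubElement(record, "email").text = row["email"]    # assumed column name
                    # the transform sets the action explicitly, otherwise Read is assumed
                    ET.SubElement(record, "transactionType").text = "UPD"
            return ET.tostring(root, encoding="unicode")

        if __name__ == "__main__":
            print(csv_to_xml("users.csv"))  # hypothetical input file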

    Read the article

  • Passing sql results to views hard-codes views to database column names

    - by Galen
    I just realized that I may not be following best practices in regards to the MVC pattern. My issue is that my views "know" information about my database. Here's my situation in pseudo code... My controller invokes a method from my model and passes the result directly to the view: view.records = tableGateway.getRecords() view.display() In my view: each records as record print record.name print record.address ... In my view I have record.name and record.address, info that's hard-coded to my database. Is this bad? What ways around it are there, other than iterating over everything in the controller and basically rewriting the records collection? That just seems silly. Thanks
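    One common way to loosen that coupling, sketched below in Python rather than the asker's (unspecified) stack, is to map each database row to a small view-model object in the controller/model layer, so the template only ever sees the view model's field names. The ViewRecord fields and column names here are illustrative assumptions, not taken from the question:

        # Sketch: map raw table rows to a view model so templates never see column names.
        from dataclasses import dataclass

        @dataclass
        class ViewRecord:
            display_name: str
            mailing_address: str

        def to_view_records(rows):
            # rows are dicts keyed by database column names (e.g. from a table gateway)
            return [ViewRecord(display_name=row["name"],
                               mailing_address=row["address"]) for row in rows]

        # controller pseudo-flow:
        #   view.records = to_view_records(table_gateway.get_records())
        # the template then prints record.display_name / record.mailing_address,
        # and a renamed database column only requires changing to_view_records().

    The trade-off is exactly the extra mapping step the asker mentions; the gain is that the database schema can change without touching any template.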

    Read the article

  • DataMapper is only returning the last part of this query

    - by Josh K
    So I'm using the following:

        $r = new Record();
        $r->select('ip, count(*) as ipcount');
        $r->group_by('ip');
        $r->order_by('ipcount', 'desc');
        $r->limit(5);
        $r->get();

        foreach ($r->all as $record) {
            echo($record->ip." ");
            echo($record->ipcount." <br />");
        }

    And I only get the last (fifth) record echoed out and no ipcount echoed. Is there a different way to go around this? I'm working on learning DataMapper (hence the questions) and need to figure some of this out. I haven't quite wrapped my head around the whole ORM thing.

    Read the article

  • UpdateModel prefix - ASP.NET MVC

    - by Kristoffer Ahl
    I'm having trouble with TryUpdateModel(). My form fields are named with a prefix, but I am using - as my separator and not the default dot.

        <input type="text" id="Record-Title" name="Record-Title" />

    When I try to update the model it does not get updated. If I change the name attribute to Record.Title it works perfectly, but that is not what I want to do.

        bool success = TryUpdateModel(record, "Record");

    Is it possible to use a custom separator?

    Read the article

  • Authlogic: Create records on other users' behalf

    - by Friðrik
    Hi Using Authlogic, what is the best way to create a record in rails on other users' behalf? Description: I have a c++ server which handles Tcp connections from many c++ clients, and I want the c++ server to create a new record in the rails database using its REST api. However, the c++ server needs to be authenticated before creating that record. What I want is to attach the original user ID (from the c++ client) to the record (but not the servers) so I know from which user the record came from. One way is for the c++ client to send its persistence token over to the c++ server which sends that token as a parameter to the create action, does that make sense? or are there maybe some better ways to do this? I have a rails app which uses authlogic for authentication. I also have another c++ client which is logs in and provides I have a c++ server which uses

    Read the article

  • Solr date range problem and specific date problem

    - by Hamid
    I index some data including dates in Solr, but when I search for a specific date I get some records (not all of them), including some records from the next day. For example: http://localhost:8080/solr/select/?q=pubdate:[2010-03-25T00:00:00Z TO 2010-03-25T23:59:59Z]&start=0&rows=10&indent=on&sort=pubdate desc I have 625000 records for 2010-03-25, but the above query returns 325412 results, which include 14 records from 2010-03-26. I also tried the query below, but did not get the right result: http://localhost:8080/solr/select/?q=pubdate:"2010-03-25T00:00:00Z"&start=0&rows=10&indent=on&sort=pubdate desc How do I get the right results for a specific date? Could you please help me? Thanks in advance, Hamid
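    One thing worth ruling out (offered as a sketch, not a diagnosis of this particular index) is the request URL itself: the brackets, colons, and the space in "sort=pubdate desc" need to be URL-encoded before they reach Solr. The helper below builds the same day query with proper encoding; the host, core path, and field name simply mirror the URLs above:

        # Sketch: build the Solr date-range request with proper URL encoding.
        import urllib.parse
        import urllib.request

        def solr_day_query(day):
            params = {
                "q": f"pubdate:[{day}T00:00:00Z TO {day}T23:59:59Z]",
                "start": 0,
                "rows": 10,
                "indent": "on",
                "sort": "pubdate desc",
            }
            return "http://localhost:8080/solr/select/?" + urllib.parse.urlencode(params)

        if __name__ == "__main__":
            url = solr_day_query("2010-03-25")
            print(url)
            # urllib.request.urlopen(url)  # requires a running Solr instance

    If the documents were indexed with local times rather than UTC, records can also spill across the day boundary regardless of how the query is written, so it is worth checking a few of the offending 2010-03-26 documents directly.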

    Read the article

  • set variables in an MVC application?

    - by Melissa
    I am trying to learn about the Orchard MVC application. I see the following code but cannot understand what it is doing. Can someone explain what this: User.As<UserPart>().Record.UserName = value; means?

        public class UserEditViewModel {
            [Required]
            public string UserName {
                get { return User.As<UserPart>().Record.UserName; }
                set { User.As<UserPart>().Record.UserName = value; }
            }

            [Required]
            public string Email {
                get { return User.As<UserPart>().Record.Email; }
                set { User.As<UserPart>().Record.Email = value; }
            }

            public IContent User { get; set; }
        }

    Read the article

  • php mysql flex unicode

    - by JonoB
    I have a problem with saving the £ symbol to a MySQL database. I am running a Flex front end with a PHP + MySQL backend. When I save a record from Flex, the string gets sent to the server as "This amount is £10". PHP views the string as above, and when it gets saved into the DB, it gets saved as "This amount is £10". My understanding is that this is correct, based on "MySQL or PHP is appending a Â whenever the £ is used". I now retrieve the above record, and it gets sent to Flex as "This amount is £10". Flex correctly displays this in a textarea as "This amount is £10". I change another field in the same record in Flex and re-save the transaction. The string now gets sent to the server as "This amount is £10". The record is now saved into the DB as "The amount is £10". Each time the record is re-saved, this effect snowballs. Thanks for any advice you can give.
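    The snowballing described here is the classic UTF-8/Latin-1 double-encoding problem: the two UTF-8 bytes of £ get reinterpreted as two Latin-1 characters (Â£) somewhere between PHP and MySQL, and every round trip adds another layer. A small Python sketch, independent of the Flex/PHP stack, shows the mechanism:

        # Sketch: how '£' becomes 'Â£' (and worse) when UTF-8 bytes are read as Latin-1.
        text = "This amount is £10"

        utf8_bytes = text.encode("utf-8")        # contains b'\xc2\xa3' for the pound sign
        misread = utf8_bytes.decode("latin-1")   # 'This amount is Â£10'
        print(misread)

        # Re-encoding the misread string and misreading it again snowballs the damage,
        # which is what repeated saves through a latin1 connection do:
        print(misread.encode("utf-8").decode("latin-1"))
        # roughly 'ÃÂ£10'; one of the bytes is an invisible control character

    The usual fix is to make the client connection, the table character set, and the HTTP responses all agree on UTF-8 (for example, issuing SET NAMES utf8 on the MySQL connection), rather than trying to repair the stored strings by hand.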

    Read the article

  • How to do animation using swing and clojure ?

    - by Humberto Pinheiro
    I'm trying to animate a chess piece on a board. First I created a java.util.Timer object that calls scheduleAtFixedRate with a TimerTask implemented as a proxy function. I keep a record of the piece to move (piece-moving-record) and, when it's appropriate (when the user moves the piece using the mouse), the TimerTask proxy function should test whether the record is not nil and execute the piece-moving function. The piece-moving function just updates the x and y coordinates of the piece according to a pre-calculated vector. I put an add-watch on the piece-moving-record so that when it changes it repaints the board (canvas). The paint method tests whether this piece-moving-record is not nil before painting it. The problem is that the animation doesn't appear. The piece just jumps to its destination, without the movement in between. Is there some problem with the animation scheme, or is there a better way to do it?

    Read the article

  • MySQL: Is it possible to return a "mixed" dataset?

    - by Tom
    Hi, I'm wondering if there's some clever way in MySQL to return a "mixed/balanced" dataset according to a specific criterion? To illustrate, let's say that there are potential results in a table that can be of Type 1 or Type 2 (i.e. a column has a value 1 or 2 for each record). Is there a clever query that would be able to directly return results alternating between 1 and 2 in sequence: 1st record is of type 1, 2nd record is of type 2, 3rd record is of type 1, 4th record is of type 2, etc... Apologies if the question is silly, just looking for some options. Of course, I could return any data and do this in PHP, but it does add some code. Thanks.
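    If window functions are available, one way to express the alternation directly in SQL is to number the rows within each type and sort by that row number first. The sketch below uses Python's built-in sqlite3 module purely to demonstrate the query shape; the table and column names are made up, MySQL 8.0+ accepts the same ROW_NUMBER() syntax, and older MySQL versions would need a session-variable workaround instead:

        # Sketch: interleave type 1 / type 2 rows by numbering rows within each type.
        # Requires SQLite 3.25+ for window functions; names here are illustrative only.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE items (id INTEGER PRIMARY KEY, type INTEGER, label TEXT);
            INSERT INTO items (type, label) VALUES
                (1, 'a1'), (1, 'a2'), (1, 'a3'),
                (2, 'b1'), (2, 'b2'), (2, 'b3');
        """)

        rows = conn.execute("""
            SELECT type, label
            FROM items
            ORDER BY ROW_NUMBER() OVER (PARTITION BY type ORDER BY id), type
        """).fetchall()

        print(rows)  # [(1, 'a1'), (2, 'b1'), (1, 'a2'), (2, 'b2'), (1, 'a3'), (2, 'b3')]

    Numbering within each partition means the first record of each type comes first, then the second of each type, and so on, which is exactly the 1, 2, 1, 2 pattern described above.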

    Read the article

  • Postgresql sequences

    - by Dylan
    When I delete all records from a PostgreSQL table and then try to reset the sequence so that a newly inserted record gets number 1, I get different results: SELECT setval('tblname_id_seq', (SELECT COALESCE(MAX(id),1) FROM tblname)); This sets the current value of the sequence to 1, but the NEXT record (actually the first, because there are no records yet) gets number 2! And I can't set it to 0, because the minimum value in the sequence is 1! When I use: ALTER SEQUENCE tblname_id_seq RESTART WITH 1; the first record that is inserted actually gets number 1! But the above statement doesn't accept a SELECT as a value instead of 1. I wish to reset the sequence to number 1 when there are no records, so that the first record starts with 1. But when there ARE already records in the table, I want to reset the sequence so that the next record inserted gets {highest}+1. Does anyone have a clear solution for this?
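    One detail that resolves this conflict is setval()'s three-argument form: the third argument (is_called) controls whether the next nextval() returns the value itself or value+1. Below is a hedged sketch of a reset that covers both the empty and non-empty cases, using psycopg2 with placeholder connection details; the single SQL statement is the interesting part:

        # Sketch: reset a sequence so the next insert gets 1 on an empty table,
        # or MAX(id)+1 otherwise, using setval's third (is_called) argument.
        import psycopg2

        RESET_SQL = """
            SELECT setval('tblname_id_seq',
                          COALESCE((SELECT MAX(id) FROM tblname), 1),
                          (SELECT MAX(id) FROM tblname) IS NOT NULL);
        """
        # is_called = false  -> next nextval() returns 1          (empty table)
        # is_called = true   -> next nextval() returns MAX(id)+1  (table has rows)

        conn = psycopg2.connect("dbname=mydb user=myuser")  # placeholder DSN
        with conn, conn.cursor() as cur:
            cur.execute(RESET_SQL)
        conn.close()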

    Read the article

  • C Struct : typedef Doubt !

    - by Mahesh
    In the given code snippet, I expected the error "symbol Record not found". But it compiled and ran fine on the Visual Studio 2010 compiler. I ran it as a C program from the Visual Studio 2010 Command Prompt in the manner: cl Record.c, then Record. Now the doubt is: doesn't typedef check for symbols? Does it work more like a forward declaration?

        #include "stdio.h"
        #include "conio.h"

        typedef struct Record R;

        struct Record {
            int a;
        };

        int main() {
            R obj = {10};
            getch();
            return 0;
        }

    Read the article

  • regular expression not behaving as expected - Python

    - by philippe
    I have the following function, which is supposed to read a .html file, search for <input> tags, and inject an <input type='hidden'> tag into the string to be shown on the page. However, that condition is never met (i.e. the if statement is never executed). What's wrong with my regex?

        def print_choose( params, name ):
            filename = path + name
            f = open( filename, 'r' )
            records = f.readlines()
            print "Content-Type: text/html"
            print
            page = ""
            flag = True
            for record in records:
                if re.match( '<input*', str(record) ) != None:
                    print record
                    page += record
                    page += "<input type='hidden' name='pagename' value='psychology' />"
                else:
                    page += record
            print page

    Thank you
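    A likely culprit, offered as a sketch rather than a confirmed diagnosis of this exact file: the pattern '<input*' actually means '<inpu' followed by zero or more 't' characters, and re.match() only matches at the very beginning of the line, so any indented <input> tag is missed. Searching anywhere in the line for the literal tag name usually behaves as intended:

        # Sketch of a more robust check: search anywhere in the line for an <input ...> tag.
        # re.search scans the whole line, and \b stops '<inputs' style names from matching.
        import re

        input_tag = re.compile(r"<input\b", re.IGNORECASE)

        def has_input_tag(line):
            return input_tag.search(line) is not None

        print(has_input_tag('  <input type="text" name="q" />'))   # True (leading spaces ok)
        print(has_input_tag('<INPUT TYPE="submit">'))              # True (case-insensitive)
        print(has_input_tag('<div>no form field here</div>'))      # False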

    Read the article

  • INSERT INTO othertbl SELECT * tbl

    - by Harry
    Current situation: INSERT INTO othertbl SELECT * FROM tbl WHERE id = '1' So I want to copy a record from tbl to othertbl. Both tables have an auto-incremented unique index. Now the new record should have a new index, rather than the value of the index of the originating record, else copying results in an "index not unique" error. A solution would be to not use the *, but since these tables have quite a few columns I really think it's getting ugly. So, is there a better way to copy a record, resulting in a new record in othertbl with a new auto-incremented index, without having to write out all the columns in the query and use a NULL value for the index? Hope it makes sense...
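    If hand-writing the column list is the only objection, one workaround (a sketch, not the only answer; the connection details and the assumption that the auto-increment column is called id are placeholders) is to build the column list from information_schema at run time and leave the id out on both sides:

        # Sketch: copy a row into othertbl with a fresh auto-increment id by building
        # the column list dynamically. Assumes both tables share the same columns.
        import pymysql

        conn = pymysql.connect(host="localhost", user="user", password="secret", database="mydb")
        with conn.cursor() as cur:
            cur.execute("""
                SELECT column_name
                FROM information_schema.columns
                WHERE table_schema = DATABASE() AND table_name = 'tbl'
                      AND column_name <> 'id'
                ORDER BY ordinal_position
            """)
            cols = ", ".join(f"`{row[0]}`" for row in cur.fetchall())

            # id is omitted on both sides, so othertbl assigns a new auto-increment value
            cur.execute(
                f"INSERT INTO othertbl ({cols}) SELECT {cols} FROM tbl WHERE id = %s", (1,)
            )
        conn.commit()
        conn.close()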

    Read the article

  • wcf http 504: Working on a mystery

    - by James Fleming
    Ok,  So you're here because you've been trying to solve the mystery of why you're getting a 504 error. If you've made it to this lonely corner of the Internet, then the advice you're getting from other bloggers isn't the answer you are after. It wasn't the answer I needed either, so once I did solve my problem, I thought I'd share the answer with you. For starters, if by some miracle, you landed here first you may not already know that the 504 error is NOT coming from IIS or Casini, that response code is coming from Fiddler. HTTP/1.1 504 Fiddler - Receive Failure Content-Type: text/html Connection: close Timestamp: 09:43:05.193 ReadResponse() failed: The server did not return a response for this request.       The take away here is Fiddler won't help you with the diagnosis and any further digging in that direction is a red herring. Assuming you've dug around a bit, you may have arrived at posts which suggest you may be getting the error because you're trying to hump too much data over the wire, and have an urgent need to employ an anti-pattern: due to a special case: http://delphimike.blogspot.com/2010/01/error-504-in-wcfnet-35.html Or perhaps you're experiencing wonky behavior using WCF-CustomIsolated Adapter on Windows Server 2008 64bit environment, in which case the rather fly MVP Dwight Goins' advice is what you need. http://dgoins.wordpress.com/2009/12/18/64bit-wcf-custom-isolated-%E2%80%93-rest-%E2%80%93-%E2%80%9C504%E2%80%9D-response/ For me, none of that was helpful. I could perform a get on a single record  http://localhost:8783/Criterion/Skip(0)/Take(1) but I couldn't get more than one record in my collection as in:  http://localhost:8783/Criterion/Skip(0)/Take(2) I didn't have a big payload, or a large number of objects (as you can see by the size of one record below) - - A-1B f5abd850-ec52-401a-8bac-bcea22c74138 .biological/legal mother This item refers to the supervisor’s evaluation of the caseworker’s ability to involve the biological/legal mother in the permanency planning process. 75d8ecb7-91df-475f-aa17-26367aeb8b21 false true Admin account 2010-01-06T17:58:24.88 1.20 764a2333-f445-4793-b54d-1c3084116daa So while I was able to retrieve one record without a hitch (thus the record above) I wasn't able to return multiple records. I confirmed I could get each record individually, (Skip(1)/Take(1))so it stood to reason the problem wasn't with the data at all, so I suspected a serialization error. The first step to resolving this was to enable WCF Tracing. Instructions on how to set it up are here: http://msdn.microsoft.com/en-us/library/ms733025.aspx. The tracing log led me to the solution. The use of type 'Application.Survey.Model.Criterion' as a get-only collection is not supported with NetDataContractSerializer.  Consider marking the type with the CollectionDataContractAttribute attribute or the SerializableAttribute attribute or adding a setter to the property. So I was wrong (but close!). The problem was a deserializing issue in trying to recreate my read only collection. http://msdn.microsoft.com/en-us/library/aa347850.aspx#Y1455 So looking at my underlying model, I saw I did have a read only collection. Adding a setter was all it took.         
    public virtual ICollection<string> GoverningResponses
    {
        get
        {
            if (!string.IsNullOrEmpty(GoverningResponse))
            {
                return GoverningResponse.Split(';');
            }
            else
                return null;
        }
    }

    Hope this helps. If it does, post a comment.

    Read the article

  • How can I generate a view or projection matrix for OpenGL 3.+

    - by Ken
    I'm transitioning from OpenGL 2 to OpenGL 3.+ and to GLSL 1.5, and I'm trying to avoid using the deprecated features. My question is: how do we now generate the view or projection matrix? I was using the matrix stack to calculate the projection matrix for me:

        GLfloat ptr[16];
        gluPerspective(...);
        glGetFloatv(GL_MODELVIEW_MATRIX, ptr);
        // then pass ptr via a uniform to the shader

    But obviously the matrix stack is deprecated, so this approach is not an option going forward. I have the 'Red Book', 7th ed, which covers 3.0 & 3.1, and it still uses the deprecated matrix functions in its examples. I could write some utility code myself to generate the matrices, but I don't want to re-invent this particular wheel, especially when this functionality is required for every 3D graphics program. What is the accepted way to generate world, view & projection matrices for OpenGL? Is there an emerging 'standard' library for this? Or is there some other hidden (to me) functionality in OpenGL/GLSL which I have overlooked?
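    For the projection part, the matrix gluPerspective used to build is straightforward to reproduce by hand. The sketch below uses Python with NumPy just to show the math (a C or GLM version would compute exactly the same numbers), producing a matrix you would upload with glUniformMatrix4fv:

        # Sketch: the perspective matrix gluPerspective(fovy, aspect, near, far) builds.
        # Written row-major here; transpose it (or build it column-major) for OpenGL.
        import math
        import numpy as np

        def perspective(fovy_degrees, aspect, z_near, z_far):
            f = 1.0 / math.tan(math.radians(fovy_degrees) / 2.0)
            a = (z_far + z_near) / (z_near - z_far)
            b = (2.0 * z_far * z_near) / (z_near - z_far)
            return np.array([
                [f / aspect, 0.0,  0.0, 0.0],
                [0.0,        f,    0.0, 0.0],
                [0.0,        0.0,  a,   b  ],
                [0.0,        0.0, -1.0, 0.0],
            ], dtype=np.float32)

        print(perspective(60.0, 16.0 / 9.0, 0.1, 100.0))

    On the C++ side, the GLM library (glm::perspective, glm::lookAt) is the usual answer to the "standard library" question, since it mirrors the old GLU conventions and produces matrices ready to hand to GLSL.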

    Read the article

  • Does RDNS for mail server have to match the mail server hostname exactly?

    - by threecheeseopera
    Typically when setting up a mail server, I create an rDNS record for the mail server IP to match the mail server hostname (ex: mail.example.com). Can I instead set the rDNS ptr to match the parent domain (e.g. example.com), if this server is being used for multiple purposes, and still send mail successfully (i.e. not be classified as spam b/c of mismatched rDNS)? Thanks! EDIT: The article at http://en.wikipedia.org/wiki/Forward_Confirmed_reverse_DNS seems to indicate that it might be more complicated than I had thought. For instance, 1) I did not know that you could have multiple PTR records for a given IP; 2) it appears that as long as each PTR record matches an A record, everything is good (basically nullifying my question). Would you agree?

    Read the article

  • Run script when POST data is sent to Apache

    - by Nathan Adams
    Across my several years of running servers there seems to be a pattern to most spam activity. My question/idea is: is there a way to tell Apache to run a script when POST data is detected? What I would want to do is perform a reverse DNS lookup on the client's IP address, and then perform a DNS lookup on the hostname in the PTR record. Afterwards, perform some checks (excuse the pseudo-code):

        if PTR does not exist:
            deny POST request
        if IP of PTR hostname = client's IP:
            allow POST request
        else:
            deny POST request

    I don't care about GET requests, even though they can be just as malicious; this idea is targeted towards spam comments, which use POST data to send the comment data to the web server. To make sure there isn't much of a time delay, I would run my own recursive DNS server. Please do note, this isn't meant to be a silver bullet for spam, but it should decrease the volume. Possible or impossible?
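    The pseudo-code above is essentially a forward-confirmed reverse DNS (FCrDNS) check. A minimal sketch of it in Python, using only the standard socket module, is below; how it would be invoked from Apache (mod_python, a CGI handler, an external RewriteMap program, etc.) is deliberately left open:

        # Sketch: forward-confirmed reverse DNS check for the client IP of a POST request.
        # Returns True only when the PTR hostname resolves back to the same IP.
        import socket

        def fcrdns_ok(client_ip):
            try:
                ptr_hostname = socket.gethostbyaddr(client_ip)[0]  # reverse lookup (PTR)
            except OSError:
                return False  # no PTR record: deny the POST, per the pseudo-code
            try:
                forward_ips = socket.gethostbyname_ex(ptr_hostname)[2]  # A records of that name
            except OSError:
                return False
            return client_ip in forward_ips

        # example
        print(fcrdns_ok("8.8.8.8"))  # a well-known resolver that usually passes this check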

    Read the article

  • What does the Sys_PageIn() function do in Quake?

    - by Philip
    I've noticed that in the initialization process of the original Quake the following function is called.

        volatile int sys_checksum;

        // **lots of code**

        void Sys_PageIn(void *ptr, int size)
        {
            byte *x;
            int j, m, n;

            //touch all memory to make sure its there. The 16-page skip is to
            //keep Win 95 from thinking we're trying to page ourselves in (we are
            //doing that, of course, but there's no reason we shouldn't)
            x = (byte *)ptr;

            for (n=0 ; n<4 ; n++)
            {
                for (m=0; m<(size - 16 * 0x1000) ; m += 4)
                {
                    sys_checksum += *(int *)&x[m];
                    sys_checksum += *(int *)&x[m + 16 * 0x10000];
                }
            }
        }

    I think I'm just not familiar enough with paging to understand this function. The void* ptr passed to the function is a recently malloc()'d piece of memory that is size bytes big. This is the whole function - j is an unreferenced variable. My best guess is that the volatile int sys_checksum is forcing the system to physically read all of the space that was just malloc()'d, perhaps to ensure that these spaces exist in virtual memory? Is this right? And why would someone do this? Is it for some antiquated Win95 reason?

    Read the article

  • Separating Javascript and Html, when dynamically adding html via javascript

    - by optician
    I am currently building a very dynamic table for a list application, which will basically perform basic CRUD functions via AJAX. What I would like to do is separate the visual design and JavaScript to the point where I can change the design side without touching the JS side. This would only work where the design stays roughly the same (I would like to use it for rapid prototyping). Here is an example:

        <table>
            <tr><td>record-123</td><td>I am line 123</td><td>delete row</td></tr>
            <tr><td>record-124</td><td>I am line 124</td><td>delete row</td></tr>
            <tr><td>record-125</td><td>I am line 125</td><td>delete row</td></tr>
            <tr><td>add new record</td></tr>
        </table>

    Now, when I add a new record, I would like to insert a new row of HTML, but I would rather not put this HTML into the JavaScript file. What I am considering is creating a row like this on the page, near the table:

        <tr style='visble:none;' id='template-row'><td>record-id</td><td>content-area</td><td>delete row</td></tr>

    And when I come to add the new row, I search the page for the tag with id='template-row', grab it, do a string replace on it, and then put it in the right place in the page. As long as the design doesn't shift radically, and I keep the placeholder strings the same, it means designs can be quickly modified without touching the JS. Can anyone give any advice on a methodology like this?

    Read the article

  • Problem using AVAudioRecorder.

    - by tek3
    Hi all, I am facing a strange problem with AVAudioRecorder. In my application I need to record audio and play it. I am creating my recorder as:

        if(recorder) {
            if(recorder.recording)
                [recorder stop];
            [recorder release];
            recorder = nil;
        }
        NSString *filePath = [NSHomeDirectory() stringByAppendingPathComponent:
            [NSString stringWithFormat:@"Documents/%@.caf", songTitle]];
        NSDictionary *recordSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
            [NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
            [NSNumber numberWithInt: kAudioFormatAppleIMA4], AVFormatIDKey,
            [NSNumber numberWithInt: 1], AVNumberOfChannelsKey,
            [NSNumber numberWithInt: AVAudioQualityMax], AVEncoderAudioQualityKey,
            nil];
        recorder = [[AVAudioRecorder alloc] initWithURL: [NSURL fileURLWithPath:filePath]
                                               settings: recordSettings
                                                  error: nil];
        recorder.delegate = self;
        if ([recorder prepareToRecord] == YES){
            [recorder record];

    I am releasing and creating the recorder every time I press the record button. But the problem is that AVAudioRecorder takes some time before it starts to record, so if I press the record button multiple times in quick succession, my application freezes for a while. The same code works fine without any problem when headphones are connected to the device: there is no delay in recording, and the app doesn't freeze even if I press the record button multiple times. Any help in this regard will be highly appreciated. Thanx in advance.

    Read the article

  • Problem merging similar XML files with XSL

    - by LOlliffe
    I have two documents that I need to merge, that happen in a way that I don't seem to be able to find covered in other examples. Namely, that it needs to match not only on a node's attribute at one level, but also on the value of an attribute a node level below that, to get that node's value. I'm trying to take this sample: <?xml version="1.0" encoding="UTF-8" ?> <marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <marc:record> <marc:datafield tag="035" ind1=" " ind2=" "> <marc:subfield code="a">12345</marc:subfield> </marc:datafield> <marc:datafield tag="041" ind1=" " ind2=" "> <marc:subfield code="a">eng</marc:subfield> </marc:datafield> <marc:datafield tag="650" ind1=" " ind2="4"> <marc:subfield code="a">Art</marc:subfield> </marc:datafield> <marc:datafield tag="949" ind1=" " ind2=" "> <marc:subfield code="i">Review of conference proceedings</marc:subfield> </marc:datafield> </marc:record> <marc:record> <marc:datafield tag="035" ind1=" " ind2=" "> <marc:subfield code="a">54321</marc:subfield> </marc:datafield> <marc:datafield tag="041" ind1=" " ind2=" "> <marc:subfield code="a">eng</marc:subfield> </marc:datafield> <marc:datafield tag="650" ind1=" " ind2="4"> <marc:subfield code="a">Byzantine</marc:subfield> </marc:datafield> </marc:record> </marc:collection> And when the value of "datafield" '035', "subfield" 'a' matches e.g. "12345" <marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim" xmlns:fn="http://www.w3.org/2005/xpath-functions" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:fo="http://www.w3.org/1999/XSL/Format"> <marc:record> <marc:datafield ind2=" " ind1=" " tag="035"> <marc:subfield code="a">12345</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">General works</marc:subfield> <marc:subfield code="x">Historians and critics</marc:subfield> <marc:subfield code="x">Smith, John, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">Généralités</marc:subfield> <marc:subfield code="x">Historiens et critiques d'art</marc:subfield> <marc:subfield code="x">Dietrichson, Lorentz, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield ind2=" " ind1=" " tag="654"> <marc:subfield code="a">General works</marc:subfield> </marc:datafield> <marc:datafield ind2=" " ind1=" " tag="654"> <marc:subfield code="a">Généralités</marc:subfield> <marc:subfield code="b">Historiens et critiques d'art</marc:subfield> <marc:subfield code="b">Smith, John, 1834-1917</marc:subfield> </marc:datafield> </marc:record> <marc:record> <marc:datafield ind2=" " ind1=" " tag="035"> <marc:subfield code="a">54321</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">General works</marc:subfield> <marc:subfield code="x">Historians and critics</marc:subfield> <marc:subfield code="x">Lange, Julius Henrik, 1838-1896</marc:subfield> </marc:datafield> </marc:record> </marc:collection> The result should be: <?xml version="1.0" encoding="UTF-8" ?> <marc:collection xmlns:marc="http://www.loc.gov/MARC21/slim" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <marc:record> <marc:datafield tag="035" ind1=" " ind2=" "> <marc:subfield code="a">12345</marc:subfield> </marc:datafield> <marc:datafield tag="041" ind1=" " ind2=" "> <marc:subfield code="a">eng</marc:subfield> </marc:datafield> <marc:datafield tag="650" ind1=" " ind2="4"> <marc:subfield code="a">Art</marc:subfield> </marc:datafield> 
<marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">General works</marc:subfield> <marc:subfield code="x">Historians and critics</marc:subfield> <marc:subfield code="x">Smith, John, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">Généralités</marc:subfield> <marc:subfield code="x">Historiens et critiques d'art</marc:subfield> <marc:subfield code="x">Dietrichson, Lorentz, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield ind2=" " ind1=" " tag="654"> <marc:subfield code="a">General works</marc:subfield> </marc:datafield> <marc:datafield ind2=" " ind1=" " tag="654"> <marc:subfield code="a">Généralités</marc:subfield> <marc:subfield code="b">Historiens et critiques d'art</marc:subfield> <marc:subfield code="b">Smith, John, 1834-1917</marc:subfield> </marc:datafield> <marc:datafield tag="949" ind1=" " ind2=" "> <marc:subfield code="i">Review of conference proceedings</marc:subfield> </marc:datafield> </marc:record> <marc:record> <marc:datafield tag="035" ind1=" " ind2=" "> <marc:subfield code="a">54321</marc:subfield> </marc:datafield> <marc:datafield tag="041" ind1=" " ind2=" "> <marc:subfield code="a">eng</marc:subfield> </marc:datafield> <marc:datafield tag="650" ind1=" " ind2="4"> <marc:subfield code="a">Byzantine</marc:subfield> </marc:datafield> <marc:datafield ind2="4" ind1=" " tag="650"> <marc:subfield code="a">General works</marc:subfield> <marc:subfield code="x">Historians and critics</marc:subfield> <marc:subfield code="x">Lange, Julius Henrik, 1838-1896</marc:subfield> </marc:datafield> </marc:record> </marc:collection> I've tried using examples that I've found that did lookups, but none of them seemed to work. I didn't include any of my XSL, because all of my results were disasterous. I keep looking at it, like it must be simple, but I'm just not getting any decent results. Any help or pointers would be greatly appreciated. Thanks!
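    If an XSLT-free approach is acceptable, the same merge can be sketched with Python's xml.etree.ElementTree: index the second file's records by their 035 $a value and copy every non-035 datafield into the matching record of the first file. This is a substitute technique, not a stylesheet; the output element ordering (merged fields are appended at the end of each record) and the choice to copy all non-035 fields are assumptions, as are the file names in the commented usage line:

        # Sketch: merge MARC records from two files on the 035 $a control number.
        import xml.etree.ElementTree as ET

        MARC = "http://www.loc.gov/MARC21/slim"
        ET.register_namespace("marc", MARC)

        def control_number(record):
            # value of datafield 035, subfield 'a', used as the match key
            for df in record.findall(f"{{{MARC}}}datafield[@tag='035']"):
                sf = df.find(f"{{{MARC}}}subfield[@code='a']")
                if sf is not None:
                    return sf.text
            return None

        def merge(base_file, extra_file, out_file):
            base = ET.parse(base_file)
            extra = ET.parse(extra_file)
            # index the second file's records by their 035 $a value
            lookup = {}
            for rec in extra.getroot().findall(f"{{{MARC}}}record"):
                key = control_number(rec)
                if key:
                    lookup[key] = rec
            # copy every non-035 datafield from the matching record
            for rec in base.getroot().findall(f"{{{MARC}}}record"):
                match = lookup.get(control_number(rec))
                if match is None:
                    continue
                for df in match.findall(f"{{{MARC}}}datafield"):
                    if df.get("tag") != "035":
                        rec.append(df)
            base.write(out_file, encoding="utf-8", xml_declaration=True)

        # merge("first.xml", "second.xml", "merged.xml")  # hypothetical file names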

    Read the article
