Search Results

Search found 11010 results on 441 pages for 'txt record'.


  • How to get only one record for each set of duplicate ids in Oracle?

    - by Psychocryo
    Suppose I have this table:

        group_id | image | image_id
        ---------+-------+---------
              23 | blob  |        1
              23 | blob  |        2
              23 | blob  |        3
              21 | blob  |        4
              21 | blob  |        5
              25 | blob  |        6
              25 | blob  |        7

    How do I get just one row for each group_id? There may be multiple images for one group_id, and I want only one result per group_id. I tried DISTINCT, but then I can select only group_id. MAX on the image column would not work either.
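
    A common Oracle approach is the analytic ROW_NUMBER(), which numbers the rows inside each group so you can keep only the first. A minimal sketch, assuming the table is named images:

        SELECT group_id, image, image_id
        FROM (SELECT i.*,
                     ROW_NUMBER() OVER (PARTITION BY group_id
                                        ORDER BY image_id) AS rn
              FROM images i)
        WHERE rn = 1;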

    Read the article

  • :from parameter in active record find not well designed?

    - by potlee
    I got this error:

        SQLite3::SQLException: no such column: apis.name:
        SELECT * FROM examples WHERE ("apis"."name" = 'deep')

    from my code:

        Api.find :all, :from => params[:table_name], :conditions => { :name => 'deep' }

    I need to build a back-end Rails application that will be used by a Silverlight application. One of the requirements is to fetch simple data from the database, and I need to be able to query different tables with the same code (my app has 2000 tables!). I don't think it makes sense for Rails to put "apis" in the WHERE clause when I am selecting from another table. Is there any specific reason for this?
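
    The table name in the WHERE clause comes from the model rather than from :from: Active Record qualifies the :conditions hash with the model's own table name, so overriding :from alone leaves the clause pointing at "apis". A rough workaround sketch (Rails 2.x-era API; the whitelist name is hypothetical, and whitelisting matters because params[:table_name] is user input):

        # point the model itself at the requested table so both the FROM
        # clause and the qualified conditions use the same table name
        ALLOWED_TABLES = %w[examples apis]   # hypothetical whitelist
        table = params[:table_name]
        raise ActiveRecord::RecordNotFound unless ALLOWED_TABLES.include?(table)

        Api.set_table_name(table)            # Rails 2.x class-level setter
        records = Api.find(:all, :conditions => { :name => 'deep' })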

    Read the article

  • In Django, using __init__() method of non-abstract parent model to record class name of child model

    - by k-g-f
    In my Django project, I have a non-abstract parent model defined as follows:

        class Parent(models.Model):
            classType = models.CharField(editable=False, max_length=50)

    and, say, two child models defined as follows:

        class ChildA(Parent):
            parent = models.OneToOneField(Parent, parent_link=True)

        class ChildB(Parent):
            parent = models.OneToOneField(Parent, parent_link=True)

    Each time I create an instance of ChildA or ChildB, I'd like the classType attribute to be set to the string "ChildA" or "ChildB" respectively. What I have done is add an __init__() method to Parent as follows:

        class Parent(models.Model):
            classType = models.CharField(editable=False, max_length=50)

            def __init__(self, *args, **kwargs):
                super(Parent, self).__init__(*args, **kwargs)
                self.classType = self.__class__.__name__

    Is there a better way to implement and achieve my desired result? One downside of this implementation is that when I have an instance of Parent, say "parent", and I want to get the type of the child object linked with "parent", calling "parent.classType" gives me "Parent". In order to get the appropriate "ChildA" or "ChildB" value, I need to write a "_getClassType()" method that wraps a custom SQL query.
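
    One commonly suggested alternative (a sketch, not the code from the question) is to record the concrete class with Django's contenttypes framework at save time instead of overriding __init__(), which also lets a Parent instance be downcast later without a custom SQL query:

        from django.contrib.contenttypes.models import ContentType
        from django.db import models

        class Parent(models.Model):
            # which concrete subclass this row really is
            # (newer Django versions also require on_delete=models.CASCADE)
            content_type = models.ForeignKey(ContentType, editable=False, null=True)

            def save(self, *args, **kwargs):
                if self.content_type_id is None:
                    self.content_type = ContentType.objects.get_for_model(type(self))
                super(Parent, self).save(*args, **kwargs)

            def cast(self):
                # re-fetch self as its concrete child class (ChildA or ChildB)
                return self.content_type.get_object_for_this_type(pk=self.pk)

    With this, parent.cast().__class__.__name__ yields "ChildA" or "ChildB" even when you start from a plain Parent queryset.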

    Read the article

  • How do I compare 2 fields and return the lowest value of each record?

    - by BigRob
    I'm slowly learning Access to make a database of products and suppliers for my parents' business. What I've got is a table of products indexed by our product reference, and two more tables for two different suppliers that contain each supplier's product reference and price, linked by our reference. I've made a query that performs a left outer join so that it returns a table of our products with each supplier's reference and price, i.e.:

        Ref | Product Name | Supplier 1 Ref | Supplier 1 Price | Supplier 2 Ref | Supplier 2 Price

    Here's the query I used:

        SELECT Catalog.Ref, Catalog.[Product Name], Catalog.Price,
               [D Products].[Supplier Ref], [D Products].Cost,
               [GS Products].[Supplier Ref], [GS Products].Cost
        FROM ([Catalog] LEFT JOIN [D Products] ON Catalog.Ref = [D Products].Ref)
        LEFT JOIN [GS Products] ON Catalog.Ref = [GS Products].Ref;

    Not all products are available from both suppliers, hence the outer join. What I want to do (with a query?) is to take the table produced by the query above and simply show the product reference, cheapest supplier reference and cheapest supplier price, i.e.:

        Ref | Cheapest Supplier Ref | Cheapest Supplier Price

    Unfortunately my SQL knowledge isn't quite good enough to figure this out, but if anyone can help I'd really appreciate it. Thanks, Rob
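
    In Access SQL this per-row minimum is usually expressed with IIf() and Nz(). A hedged sketch written against the base tables (Nz substitutes a huge number for a missing price so an absent supplier never wins; if neither supplier carries the product, both output columns come back Null):

        SELECT Catalog.Ref,
               IIf(Nz(d.Cost, 1E18) <= Nz(gs.Cost, 1E18),
                   d.[Supplier Ref], gs.[Supplier Ref]) AS [Cheapest Supplier Ref],
               IIf(Nz(d.Cost, 1E18) <= Nz(gs.Cost, 1E18),
                   d.Cost, gs.Cost) AS [Cheapest Supplier Price]
        FROM ([Catalog] LEFT JOIN [D Products] AS d ON Catalog.Ref = d.Ref)
        LEFT JOIN [GS Products] AS gs ON Catalog.Ref = gs.Ref;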

    Read the article

  • Remotely Schedule and Stream Recorded TV in Windows 7 Media Center

    - by DigitalGeekery
    Have you ever been away from home and suddenly realized you forgot to record your favorite program? Now Windows 7 Media Center users can schedule recordings remotely from their phones or mobile devices with Remote Potato.

    How It Works

    Remote Potato installs server software on the host computer running Windows 7 Media Center. Once the software is installed, we'll need to do some port forwarding on the router and set up an optional dynamic DNS address. When setup is completed, we will access the application through a web-based interface. Silverlight is required for streaming recorded TV, but scheduling recordings can be done through an HTML interface.

    Installing Remote Potato

    Download and install Remote Potato on the Media Center PC. (See download link below.) If you plan to stream any recorded TV, you'll also want to install the streaming pack located on the same page. It isn't required to stream all shows, only shows that require the AC3 audio codec. Click Yes to allow Remote Potato to add rules to the Windows Firewall for remote access. You'll likely need to accept a few UAC prompts. When notified that the rules were added, click OK. Remote Potato will then prompt you to allow administrator privileges to reserve a URL for its web server. Click Yes, and the Remote Potato server will start. Click on the configuration button at the right to reveal the settings tabs. On the General tab, you'll have the option to run Remote Potato on startup and minimized in the system tray. If you're running Media Center on a dedicated HTPC, you'll probably want to enable both startup options.

    Forwarding Ports on Your Router

    You'll need to forward a couple of ports on your router. By default, these are ports 9080 and 9081. In this example we're using a Linksys WRT54GL router; the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Applications & Gaming tab, and then the Port Range Forward tab. Under Application, type in a name of your choosing. In both the Start and End boxes, type the port number 9080. Enter the local IP address of your Media Center computer in the IP address column. Click the check box under Enable. Repeat the process on the next line, but this time use port 9081. When finished, click the Save Settings button. Note: it's highly recommended that you configure the home computer running Media Center and Remote Potato with a static IP address.

    Find Your IP Address

    You'll need to find the IP address assigned to your router by your ISP. There are many ways to do this, but a quick and easy way is to visit a site like checkip.dyndns.org (link available below). The current external IP address of your router will be displayed in the browser.

    Dynamic DNS

    This is an optional step, but it's highly recommended. Many routers, such as the Linksys WRT54GL we are using, support Dynamic DNS (DDNS). Dynamic DNS lets you tie your home router's external IP address to a domain name: every time your router is assigned a new IP address by your ISP, the domain name is updated to point to the new address. Remote Potato's user interface is accessed over the Internet by connecting to your router's IP address followed by a colon and the port number (e.g. XXX.XXX.XXX.XXX:9080). Instead of constantly having to look up and remember an IP address, you can use DDNS along with a third-party provider like DynDNS.com to sign up for a free domain name and have it updated each time your router is assigned a new IP address. Go to the DynDNS.com website (see link at the end of the article) and sign up for a free domain name; you'll need to register and confirm by email. Once you've signed in and selected your domain name, click Activate Services. You'll get a confirmation message that your domain name has been activated. On the Linksys WRT54GL, click on the Setup tab and then DDNS, and select DynDNS.org (or TZO.com if you prefer their service) from the drop-down list. With DynDNS, you'll need to fill in the username and password you signed up with at the DynDNS website, plus the hostname you chose. Note: you can connect over your local network with the IP address of the computer running Remote Potato followed by a colon and the port number, e.g. 192.168.1.2:9080.

    Logging in to Remote Potato and Recording a Show

    Once you connect, you'll see the start page. To view the TV listings, click on TV Guide. There are a few ways to navigate the listings: at the top left, you can click on any of the preset time buttons to jump to the listings at that time of day; click on the arrows to the right and left of the day and date at the top center to move to the previous or next day; or jump to a specific day with the date buttons at the top right. To set up a recording, click on a program. You can choose to record the individual show or the entire series by clicking on Record Show or Record Series.

    Remote Potato on Mobile Devices

    Perhaps the coolest feature of Remote Potato is the ability to schedule recordings from your phone or mobile device. Note: on any device or computer without Silverlight, you will be prompted to view the HTML page. Select Browse Listings, then select the program to record. In the Program Details, select Record Show to record the single episode or Record Series to record all instances of the series. You will then see a red dot on the program listing to indicate that the show is scheduled for recording.

    Streaming Recorded TV

    Click on Recorded TV from the home screen to access your previously recorded TV programs. Click on the selection you wish to stream, then click Play. If you receive an error message, you'll need to install the streaming pack for Remote Potato, found on the same download page as the installation files (see link below). The "Begin from" slider allows you to start playback from the start (the default) or from a different point in the program. The Quality (bitrate) setting lets you choose the quality of the playback. We found the video quality on the Normal setting to be pretty lousy, and Low was just pointless. High was the best overall viewing experience, providing smooth, quality video playback; we experienced significant stuttering during playback on the Ultra High setting. Click Start when you are ready to begin. When playback begins you'll see a slider at the top right: move it left or right to decrease or increase the size of the video, and there's also a button to switch to full screen.

    Media Center users who travel frequently or are always on the go will likely find Remote Potato to be a blessing. Since being released earlier this year, updates for Remote Potato have come fast and furious; the latest beta release includes support for streaming music and photos. If you like those nice network TV logos, check out our article on adding TV channel logos to Windows Media Center.

    Downloads and Links

    Download Remote Potato and Streaming Pack
    Find your IP address
    Sign up for a domain name at DynDNS.com

    Read the article

  • PostgreSQL 8.4 won't start after blackout

    - by RiZe
    I have a problem starting PostgreSQL 8.4 on Ubuntu 9.10 Server after a blackout. When I try to connect to the database it says:

        psql: server closed the connection unexpectedly
            This probably means the server terminated abnormally
            before or while processing the request.

    When I try to start it:

        $ sudo -u postgres /etc/init.d/postgresql-8.4 start
         * Starting PostgreSQL 8.4 database server        [ OK ]

    Netstat output:

        $ netstat -tulp
        (No info could be read for "-p": geteuid()=1000 but you should be root.)
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address          Foreign Address  State   PID/Program name
        tcp        0      0 localhost:postgresql   *:*              LISTEN  -
        tcp        0      0 192.168.1.35:svn       *:*              LISTEN  -
        tcp        0      0 192.168.1.35:http-alt  *:*              LISTEN  -
        tcp        0      0 *:ssh                  *:*              LISTEN  -
        tcp6       0      0 localhost:postgresql   [::]:*           LISTEN  -
        tcp6       0      0 [::]:ssh               [::]:*           LISTEN  -
        udp        0      0 *:bootpc               *:*              -

    But it still doesn't work, so let's restart it:

        $ sudo -u postgres /etc/init.d/postgresql-8.4 restart
         * Restarting PostgreSQL 8.4 database server
         * The PostgreSQL server failed to start. Please check the log output:

        2009-11-30 13:39:37 CET LOG: database system was shut down at 2009-11-30 13:39:33 CET
        2009-11-30 13:39:37 CET LOG: autovacuum launcher started
        2009-11-30 13:39:37 CET LOG: database system is ready to accept connections
        2009-11-30 13:39:37 CET LOG: incomplete startup packet
        2009-11-30 13:39:38 CET LOG: server process (PID 2240) was terminated by signal 11: Segmentation fault
        2009-11-30 13:39:38 CET LOG: terminating any other active server processes
        2009-11-30 13:39:38 CET LOG: all server processes terminated; reinitializing
        2009-11-30 13:39:38 CET LOG: database system was interrupted; last known up at 2009-11-30 13:39:37 CET
        2009-11-30 13:39:38 CET LOG: database system was not properly shut down; automatic recovery in progress
        2009-11-30 13:39:38 CET LOG: record with zero length at 0/11D464C
        2009-11-30 13:39:38 CET LOG: redo is not required
        2009-11-30 13:39:38 CET LOG: autovacuum launcher started
        2009-11-30 13:39:38 CET LOG: database system is ready to accept connections

        [the same cycle -- a server process dies with signal 11, automatic recovery
        runs, "redo is not required", the system comes back up -- repeats about twice
        a second for PIDs 2248 through 2312, ending with:]

        2009-11-30 13:39:43 CET LOG: database system is ready to accept connections
        [fail]

    So what happened, and what can I do to solve this? Thanks for replies.

    Read the article

  • SOA Suite Integration: Part 3: Loading files

    - by Anthony Shorten
    One of the most common scenarios in SOA integration is loading a file into the product from an external source. Oracle SOA Suite provides a File Adapter that can feed many file types into your BPEL process. For this example I will use the File Adapter to load a file of users and emails to update the user object within the Oracle Utilities Application Framework. Remember, you can repeat this process with other objects and other file types; again, I am illustrating the ease of integration.

    The first thing to do is create an empty BPEL process to hold our flow. In Oracle JDeveloper this is achieved by specifying the Define Service Later template (the other templates have predefined inputs and outputs, and in this case we want to specify those ourselves). I will create a simpleFileLoad process to house the flow.

    You start with an empty canvas, so you first specify the load part of the process using the File Adapter. Select the File Adapter from the Component Palette under BPEL Services and drag and drop it onto the left-side Partner Links (left is input). Name the service; in this case I chose LoadFile. Press Next. We will define the interface as part of the wizard, so select "Define from operation and schema (specified later)". Press Next. Choose Read File to denote that we will read the file, and keep the default operation name, Read. Press Next.

    The next step is to tell the adapter the location of the files, how to process them, and what to do with them after they have been processed. I am using hard-coded locations in this example, but you can use logical locations as well. Press Next. Now tell the adapter how to recognize the files to load. In my case I am using CSV files and, more importantly, telling the adapter to run the process for each record it encounters in the file. Press Next. Next, tell the adapter how often to poll for the files; I have taken the defaults. Press Next.

    At this stage there is no definition of the input format, so invoke the Native Format Wizard, which guides you through creating the file input format. Clicking the purple cog icon starts the wizard. After an introduction screen (not shown), you specify the format of the input file. The File Adapter supports multiple format types; for this example I use Delimited, as I am loading a CSV file. Press Next.

    The wizard works best with a sample. I have a sample file, and the wizard asks how much of the file to use as a template; I use the defaults. Note: if you are using a language with characters beyond US-ASCII, this is where you specify the character set. Press Next. The sample contains multiple instances of a single record type (the wizard supports complex types as well), so choose the appropriate setting for the file. Press Next.

    You then specify the file element and the record element, which the wizard uses to translate the CSV data into an XML structure (this will make sense later). I am using LoadUsers as my file (root) element and UserRecord as my record root element. Press Next. As the file is CSV, the delimiter is ",", and I also specify that the end-of-line (EOL) indicator marks the end of a record. Press Next.

    Up to this point you have not given the columns their names. In my case the sample includes the column names in its first record; that is not always the case, but you can specify the names and formats of columns in this dialog (not shown). Press Next. The wizard now generates the schema for the input file; you can give the schema a name, and I used userupdate.xsd. To verify the schema, press Test. You can test it by specifying an input sample and pressing the green play button; you will see the delimiters you specified earlier for the file and the records. Press OK to continue. A confirmation screen displays the location of the schema in your project. Press Finish to return to the File Adapter configuration. You will now see the schema and elements pre-populated from the wizard. Press Next. The File Adapter configuration is now complete. Press Finish.

    Now you need to receive the input from the LoadFile component, so place a Receive node in the BPEL process by dragging the Receive component from the Component Palette under BPEL Constructs onto the process. Link the Receive node with the LoadFile component by dragging the leftmost connect node of the Receive node to the LoadFile component. Once the link is established, name the Receive node appropriately and, as in the previous post in this series, generate input variables for the BPEL process to hold the input records.

    Next, add the product web service. The process is the same as described in the previous post in this series: drop the Web Service BPEL service onto the right side of the process and fill in the details of the WSDL URL. You also have to add an Invoke node to call the service and generate the input and output variables for the call in the Invoke node.

    Now, to get the inputs from the file to the service, use a Transform (an Assign action would also work, but a Transform action is more flexible). Drag and drop the Transform component from the Component Palette under Oracle Extensions and place it between the Receive and Invoke nodes. Name the Transform node Mapper File, associate the source of the mapping with the schema from the Receive node, and make the output the input variable of the Invoke node.

    Now build the transform. First map the user and email attributes by dragging the elements from the left to the right. The reason we needed the transform is that we will be telling the AS-User service to issue an update action. Remember, when we registered the service we used Read as the default action; if we do not tell the service otherwise, it will use Read instead of the desired Update. To specify the update action, click on the transactionType node on the right and select Set Text, specifying a transactionType of UPD (for update). The mapping is now complete.

    The final BPEL process is ready for deployment. Deploy it to the server and test the service by simply dropping a file, matching the pattern/name you specified, into the directory you specified in the File Adapter. You will see each record as a separate instance entry in the Fusion Middleware Control console. You can now load files into the product, and you can repeat this process for each type of file to process. While this was a simple example, it illustrates how loading data can be achieved using SOA Suite in conjunction with our products.

    Read the article

  • Python- Convert a mixed number to a float

    - by user345660
    I want to make a function that converts mixed numbers and fractions (as strings) to floats. Here are some examples:

        '1 1/2' -> 1.5
        '11/2'  -> 5.5
        '7/8'   -> 0.875
        '3'     -> 3
        '7.7'   -> 7.7

    I'm currently using this function, but I think it could be improved. It also doesn't handle numbers that are already in decimal representation.

        def mixedtofloat(txt):
            mixednum = re.compile("(\\d+) (\\d+)\\/(\\d+)", re.IGNORECASE | re.DOTALL)
            fraction = re.compile("(\\d+)\\/(\\d+)", re.IGNORECASE | re.DOTALL)
            integer = re.compile("(\\d+)", re.IGNORECASE | re.DOTALL)
            m = mixednum.search(txt)
            n = fraction.search(txt)
            o = integer.search(txt)
            if m:
                return float(m.group(1)) + (float(m.group(2)) / float(m.group(3)))
            elif n:
                return float(n.group(1)) / float(n.group(2))
            elif o:
                return float(o.group(1))
            else:
                return txt

    Thanks!
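
    One tighter approach (a sketch, not the original code) is to try float() first for the plain and decimal cases and fall back on the standard library's fractions module for the fractional part:

        from fractions import Fraction

        def mixed_to_float(txt):
            # '3' and '7.7' parse directly
            try:
                return float(txt)
            except ValueError:
                pass
            # '1 1/2' -> whole='1', frac='1/2'; '7/8' -> whole='', frac='7/8'
            whole, _, frac = txt.strip().rpartition(' ')
            result = float(Fraction(frac))
            if whole:
                result += float(whole)
            return result

    This handles all five examples above and raises ValueError on garbage input instead of silently returning the string.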

    Read the article

  • How to sort hashmap?

    - by agazerboy
    Hi all! I have a HashMap whose keys are like "folder/1.txt", "folder/2.txt", "folder/3.txt", and whose values hold the data of those text files. Now I am stuck: I want to sort this map, but it does not let me do it. Here is my map's type:

        HashMap<String, ArrayList<String>>

    The following call works well, but it is for an ArrayList, not for a HashMap:

        Collections.sort(values, Collections.reverseOrder());

    I also tried a TreeMap, but it also didn't work, or maybe I couldn't make it work. I used the following to sort with the TreeMap:

        HashMap testMap = new HashMap();
        Map sortedMap = new TreeMap(testMap);

    Is there any other way to do it? I have one doubt: maybe the problem is that my keys are "folder/1.txt", "folder/2.txt", and so on?
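
    A TreeMap does sort the keys, but lexicographically, so "folder/10.txt" would land before "folder/2.txt". A sketch (assuming every key looks like folder/N.txt) that sorts by the numeric part with a custom Comparator:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.TreeMap;

        public class SortByNumericKey {

            // extract N from "folder/N.txt", e.g. "folder/12.txt" -> 12
            static int keyNumber(String key) {
                String name = key.substring(key.lastIndexOf('/') + 1);
                return Integer.parseInt(name.replace(".txt", ""));
            }

            public static void main(String[] args) {
                HashMap<String, ArrayList<String>> testMap =
                        new HashMap<String, ArrayList<String>>();
                testMap.put("folder/2.txt", new ArrayList<String>());
                testMap.put("folder/10.txt", new ArrayList<String>());
                testMap.put("folder/1.txt", new ArrayList<String>());

                Map<String, ArrayList<String>> sortedMap =
                        new TreeMap<String, ArrayList<String>>(new Comparator<String>() {
                            public int compare(String a, String b) {
                                return keyNumber(a) - keyNumber(b);
                            }
                        });
                sortedMap.putAll(testMap);

                System.out.println(sortedMap.keySet());
                // [folder/1.txt, folder/2.txt, folder/10.txt]
            }
        }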

    Read the article

  • piping findstr's output

    - by Gauthier
    On the Windows command line, I want to search a file for all rows starting with

        # NNN "<file>.inc"

    where NNN is a number and <file> is any string. I want to use findstr, because I cannot require that the users of the script install ack. Here is the expression I came up with:

        >findstr /r /c:"^# [0-9][0-9]* \"[a-zA-Z0-9_]*.inc" all_pre.txt

    The file to search is all_pre.txt. So far so good. Now I want to pipe that to another command, say for example more:

        >findstr /r /c:"^# [0-9][0-9]* \"[a-zA-Z0-9]*.inc" all_pre.txt | more

    The result of this is the same output as the previous command, but with the file name as a prefix on every row (all_pre.txt). Then comes:

        FINDSTR: cannot open |
        FINDSTR: cannot open more

    Why doesn't the pipe work? A snip of the content of all_pre.txt:

        # 1 "main.ss"
        # 7 "main.ss"
        # 11 "main.ss"
        # 52 "main.ss"
        # 1 "Build_flags.inc"
        # 7 "Build_flags.inc"
        # 11 "Build_flags.inc"
        # 20 "Build_flags.inc"
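
    The likely culprit is cmd.exe's quote parsing rather than findstr itself: cmd does not treat the backslash as an escape character, so it sees the \" as closing the quoted argument and the final " as reopening it, which puts | more inside a quoted string. findstr then receives "|" and "more" as file names, which is exactly the error shown. A hedged workaround sketch is to avoid embedding a quote at all and match it with . (any single character) instead:

        >findstr /r /c:"^# [0-9][0-9]* .[a-zA-Z0-9_]*\.inc" all_pre.txt | more

    This loosens the pattern by one character but keeps cmd's quote count balanced, so the pipe is parsed as a pipe again.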

    Read the article

  • Passing sql results to views hard-codes views to database column names

    - by Galen
    I just realized that I may not be following best practices with regard to the MVC pattern. My issue is that my views "know" information about my database. Here's my situation in pseudo-code.

    My controller invokes a method from my model and passes the result directly to the view:

        view.records = tableGateway.getRecords()
        view.display()

    In my view:

        for each records as record
            print record.name
            print record.address
            ...

    So in my view I have record.name and record.address, info that's hard-coded to my database column names. Is this bad? What ways around it are there, other than iterating over everything in the controller and basically rewriting the records collection? That just seems silly. Thanks

    Read the article

  • Memory-efficient import of many data files into a pandas DataFrame in Python

    - by richardh
    I import a directory of |-delimited .dat files into a pandas DataFrame. The following code works, but I eventually run out of RAM with a MemoryError:

        import pandas as pd
        import glob

        temp = []
        dataDir = 'C:/users/richard/research/data/edgar/masterfiles'
        for dataFile in glob.glob(dataDir + '/master_*.dat'):
            print dataFile
            temp.append(pd.read_table(dataFile, delimiter='|', header=0))
        masterAll = pd.concat(temp)

    Is there a more memory-efficient approach? Or should I go whole hog to a database? (I will move to a database eventually, but I am baby-stepping my move to pandas.) Thanks! FWIW, here is the head of an example .dat file:

        cik|cname|ftype|date|fileloc
        1000032|BINCH JAMES G|4|2011-03-08|edgar/data/1000032/0001181431-11-016512.txt
        1000045|NICHOLAS FINANCIAL INC|10-Q|2011-02-11|edgar/data/1000045/0001193125-11-031933.txt
        1000045|NICHOLAS FINANCIAL INC|8-K|2011-01-11|edgar/data/1000045/0001193125-11-005531.txt
        1000045|NICHOLAS FINANCIAL INC|8-K|2011-01-27|edgar/data/1000045/0001193125-11-015631.txt
        1000045|NICHOLAS FINANCIAL INC|SC 13G/A|2011-02-14|edgar/data/1000045/0000929638-11-00151.txt
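
    A middle ground before a full database (a sketch assuming PyTables is installed, since HDFStore is backed by it; treat the chunk size as an assumption, and note that wide string columns may need the min_itemsize argument) is to stream each file in chunks straight into an on-disk HDF5 store, so at most one chunk lives in RAM at a time:

        import glob
        import pandas as pd

        dataDir = 'C:/users/richard/research/data/edgar/masterfiles'
        store = pd.HDFStore(dataDir + '/master.h5')
        for dataFile in glob.glob(dataDir + '/master_*.dat'):
            # read ~250k rows at a time instead of the whole file
            for chunk in pd.read_table(dataFile, delimiter='|', header=0,
                                       chunksize=250000):
                store.append('master', chunk)   # lands on disk, not in RAM
        store.close()

        # later: pull back only what you need, e.g. a couple of columns
        # masterAll = pd.HDFStore(dataDir + '/master.h5').select(
        #     'master', columns=['cik', 'date'])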

    Read the article

  • DataMapper is only returning the last part of this query

    - by Josh K
    So I'm using the following:

        $r = new Record();
        $r->select('ip, count(*) as ipcount');
        $r->group_by('ip');
        $r->order_by('ipcount', 'desc');
        $r->limit(5);
        $r->get();

        foreach ($r->all as $record) {
            echo($record->ip . " ");
            echo($record->ipcount . " <br />");
        }

    And I only get the last (fifth) record echoed out, and no ipcount echoed. Is there a different way to go around this? I'm working on learning DataMapper (hence the questions) and need to figure some of this out. I haven't quite wrapped my head around the whole ORM thing.

    Read the article

  • Guessing UTF-8 encoding

    - by Dervin Thunk
    I have a question that may be quite naive, but I feel the need to ask because I don't really know what is going on. I'm on Ubuntu. Suppose I do

        echo "t" > test.txt

    If I then run file test.txt, I get:

        test.txt: ASCII text

    If I then do

        echo "å" > test.txt

    then I get:

        test.txt: UTF-8 Unicode text

    How does that happen? How does file "know" the encoding, or, alternatively, how does it guess it? Thanks.
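
    file(1) guesses by inspecting the bytes. A plain "t" is a single byte below 0x80, so the file is valid ASCII; "å" is written by a UTF-8 terminal as the two bytes 0xC3 0xA5, which form a well-formed UTF-8 multi-byte sequence (and are not plain ASCII). When every high byte in a file fits UTF-8's encoding pattern, file reports it as UTF-8. You can inspect the bytes yourself (output shown for a UTF-8 locale):

        $ echo "å" | hexdump -C
        00000000  c3 a5 0a                                          |...|
        00000003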

    Read the article

  • UpdateModel prefix - ASP.NET MVC

    - by Kristoffer Ahl
    I'm having trouble with TryUpdateModel(). My form fields are named with a prefix, but I am using "-" as my separator and not the default dot:

        <input type="text" id="Record-Title" name="Record-Title" />

    When I try to update the model it does not get updated. If I change the name attribute to Record.Title it works perfectly, but that is not what I want to do.

        bool success = TryUpdateModel(record, "Record");

    Is it possible to use a custom separator?

    Read the article

  • Printing elements out of list

    - by chavanak
    Hi, I have a certain check to be done, and if the check is satisfied, I want the result to be printed. Below is the code:

        import string
        import codecs
        import sys

        y = sys.argv[1]
        list_1 = []
        f = 1.0
        x = 0.05
        write_in = open("new_file.txt", "w")
        write_in_1 = open("new_file_1.txt", "w")

        ligand_file = open(y, "r")                        # open the receptor.txt file
        ligand_lines = ligand_file.readlines()            # read all the lines into the array
        ligand_lines = map(string.strip, ligand_lines)    # remove the newline characters
        ligand_file.close()

        ligand_file = open("unique_count_c_from_ac.txt", "r")
        ligand_lines_1 = ligand_file.readlines()
        ligand_lines_1 = map(string.strip, ligand_lines_1)
        ligand_file.close()

        s = []
        for i in ligand_lines:
            for j in ligand_lines_1:
                j = j.split()
                if i == j[1]:
                    print j

    The above code works great, but when I print j, it prints like ['351', '342'], whereas I am expecting 351 342 (with one space in between). Since it is more of a Python question, I have not included the input files (basically they are just numbers). Can anyone help me? Cheers, Chavanak
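
    print j shows the list's repr because j is the result of split(). Joining the fields back together prints them with single spaces (Python 2 syntax, matching the code above):

        print ' '.join(j)      # 351 342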

    Read the article

  • How can I view multiple git diffs side by side in vim

    - by Pete Hodgson
    I'd like to be able to run a command that opens up a git diff in vim, with a tab for each file in the diff set. So if for example I've changed files foo.txt and bar.txt in my working tree and I ran the command I would see vim open with two tabs. The first tab would contain a side-by-side diff between foo.txt in my working tree and foo.txt in the repository, and the second tab would contain a side-by-side diff for bar.txt. Anyone got any ideas?
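
    The closest built-in approximation (a hedged sketch; git 1.6.3 or later) is git difftool, which opens a side-by-side vimdiff for each changed file, though sequentially rather than in tabs:

        $ git config --global diff.tool vimdiff
        $ git difftool

    Getting everything into one vim instance as tabs usually takes a small wrapper that feeds the output of git diff --name-only into :tabedit, plus something like the fugitive plugin's :Gdiff to produce the side-by-side view in each tab.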

    Read the article

  • File Encryption Operation

    - by kiruthika
    Hi all, I have a doubt about the gpg command's operation. We are using the gpg command for encrypting a file. File.txt has the following contents:

        Testing hello world
        My security things.

    Now I encrypt File.txt:

        gpg --symmetric File.txt

    Now I have File.txt.gpg, which is encrypted. My doubt: if somebody opens that file and makes changes in it, I am no longer able to get the file content. It says the following:

        $ gpg --decrypt File.txt.gpg
        gpg: no valid OpenPGP data found.
        gpg: decrypt_message failed: eof

    I want my file content back even if somebody has made changes to the encrypted file. What should I do about this problem?

    Read the article

  • Read entire file in Scala?

    - by Brendan OConnor
    What's a simple and canonical way to read an entire file into memory in Scala? (Ideally, with control over character encoding.) The best I can come up with is:

        scala.io.Source.fromPath("file.txt").getLines.reduceLeft(_+_)

    or am I supposed to use one of Java's god-awful idioms, the best of which (without using an external library) seems to be:

        import java.util.Scanner
        import java.io.File
        new Scanner(new File("file.txt")).useDelimiter("\\Z").next()

    From reading mailing list discussions, it's not clear to me that scala.io.Source is even supposed to be the canonical I/O library. I don't understand what its intended purpose is, exactly. I'd like something dead-simple and easy to remember. For example, in these languages it's very hard to forget the idiom:

        Ruby:    open("file.txt").read
        Ruby:    File.read("file.txt")
        Python:  open("file.txt").read()
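
    For what it's worth, a shorter Source idiom (Scala 2.8-era API; a sketch, not an official canon) that takes an explicit charset and reads the file verbatim:

        val src = scala.io.Source.fromFile("file.txt", "UTF-8")
        val content = try src.mkString finally src.close()

    Note that getLines strips the line terminators, so the reduceLeft version above silently loses newlines; mkString keeps the file as-is.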

    Read the article

  • file operations

    - by devtech
    I am writing a script to copy the contents of files. I have temp and temp1 folders. Inside temp1 I have 1000 .txt files named 1.txt, 2.txt, 3.txt, and so on. In the temp folder I have subdirectories named 1, 2, 3, ..., and each of those folders contains one file with the extension .java or .xml. I need to automate copying the contents of each .txt file into the respective file in the temp subdirectory. Is it possible to do this using Perl?
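
    A rough sketch of such a script (the folder layout is taken from the question; treat the glob patterns as assumptions):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use File::Copy;

        my ($temp, $temp1) = ('temp', 'temp1');

        for my $txt (glob "$temp1/*.txt") {
            # 1.txt belongs to subdirectory temp/1
            my ($n) = $txt =~ m{(\d+)\.txt$} or next;

            # the single .java or .xml file inside temp/N
            my ($target) = glob "$temp/$n/*.java $temp/$n/*.xml";
            next unless defined $target;

            copy($txt, $target) or warn "copy $txt -> $target failed: $!";
        }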

    Read the article

  • Shell script to process files

    - by Harish
    I need to write a shell script to process a huge folder nearly 20 levels deep. I have to process each and every file and check which files contain lines like

        select
        insert
        update

    When I say "line", it should take the line up to the first semicolon in that file. I should get a result like this:

        C:/test.java  select * from dual
        C:/test.java  select * from test
        C:/test1.java select * from tester
        C:/test1.java select * from dual

    and so on. Right now I have a script to read all the files:

        #!/bin/ksh
        FILE=<FILEPATH to be traversed>
        TEMPFILE=<Location of temp file>
        cd $FILE
        for f in `find . ! -type d`; do
            cat $FILE/addedText.txt >> $TEMPFILE/newFile.txt
            cat $f >> $TEMPFILE/newFile.txt
            rm $f
            cat $TEMPFILE/newFile.txt >> $f
            rm $TEMPFILE/newFile.txt
        done

    I have very little knowledge of awk and sed to proceed further in reading each file and achieving what I want. Can anyone help me with this?
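
    As a hedged sketch of the awk route (assuming each statement ends with a semicolon): start collecting at any line mentioning select, insert or update, keep appending lines until one contains a semicolon, then print the file name with the accumulated statement:

        #!/bin/ksh
        # print "<file> <statement...>" for every select/insert/update ... ;
        find . -type f | while read -r f; do
            awk -v file="$f" '
                tolower($0) ~ /select|insert|update/ { collecting = 1; stmt = "" }
                collecting                           { stmt = stmt " " $0 }
                collecting && /;/                    { print file stmt; collecting = 0 }
            ' "$f"
        done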

    Read the article

  • Authlogic: Create records on other users' behalf

    - by Friðrik
    Hi. Using Authlogic, what is the best way to create a record in Rails on another user's behalf? Description: I have a C++ server which handles TCP connections from many C++ clients, and I want the C++ server to create a new record in the Rails database using its REST API. However, the C++ server needs to be authenticated before creating that record. What I want is to attach the original user ID (from the C++ client) to the record, not the server's, so I know which user the record came from. One way is for the C++ client to send its persistence token over to the C++ server, which then sends that token as a parameter to the create action. Does that make sense, or are there maybe better ways to do this?

    Read the article

  • Scraped HTML is not written at the beginning of the text file

    - by karikari
    Currently, I'm scraping the HTML code of a page and writing it to a .txt file. My problem is: why are there empty spaces or empty lines at the beginning? The HTML code written to the .txt file does not seem to start at the beginning of the file; that is, the '<' is not located at position 0. After a few runs, my HTML is always written a few lines down inside the .txt file. Can anyone tell me why? : )
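
    Most likely the scraped string itself begins with whitespace (leading newlines before the first tag are common in page markup), or the file is opened in append mode. A tiny hedged sketch (the variable and file names are illustrative):

        # strip leading whitespace and open with 'w' so writing starts at offset 0
        with open('page.txt', 'w') as out:
            out.write(html.lstrip())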

    Read the article

  • Solr date range problem and specific date problem

    - by Hamid
    I index some data, including dates, in Solr, but when I search for a specific date I get some records (not all records), including some records from the next day. For example:

        http://localhost:8080/solr/select/?q=pubdate:[2010-03-25T00:00:00Z TO 2010-03-25T23:59:59Z]&start=0&rows=10&indent=on&sort=pubdate desc

    I have 625000 records for 2010-03-25, but the above query returns 325412, which includes 14 records from 2010-03-26. I also tried the query below, but it does not give the right result either:

        http://localhost:8080/solr/select/?q=pubdate:"2010-03-25T00:00:00Z"&start=0&rows=10&indent=on&sort=pubdate desc

    How do I get the right result for a specific date? Could you please help me? Thanks in advance, Hamid

    Read the article
