Search Results

Search found 89638 results on 3586 pages for 'file table'.


  • Windows file locks allowing multiple users to write to open file over network

    - by JPbuntu
    I have six Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so only one client can open a file at a given time. I thought this was pretty standard behavior of file locks, but I can't get it to work. As it stands, a file can be opened by two users, and changed and saved by either one of them; the last save overwrites whatever changes the other user made. At first I thought this was a Samba configuration problem, but I get this behavior even between two Windows machines. So far I have only tested:

        Windows XP <-> Windows Vista
        Windows XP -> Samba <- Windows Vista

    and both give the same behavior. When I tested the Samba configuration, I had set strict locking = yes and got errors logged like this:

        close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx

    Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance.

    EDIT: I tested an Excel file again, and it is now working properly in both above-mentioned cases; I am also no longer getting the error mentioned above. I don't know what happened; perhaps a restart fixed it? (It also works with strict locking = no.) However, I still need to find a solution for the CAD/CAM files we use. The software is Vector, and it does not seem to be using file locks. Is there any software that I can use to manage these files, so two people can't open/edit them at a time? Maybe a Windows application that forces file locks? Or a dirt-simple version control system? (The only ones I have seen are too complicated for our needs.)
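
    For reference, a minimal smb.conf sketch of the share-level locking options involved (the share name and path are placeholders; defaults vary by Samba version):

        [prod]
            path = /srv/samba/prod
            # enable byte-range locking on this share
            locking = yes
            # honor client locks strictly (slower, but enforces locking)
            strict locking = yes
            # let Windows clients request opportunistic locks
            oplocks = yes

    Note that locking only helps when the client application actually requests locks; applications that open files without them (as the Vector software seems to) will still allow concurrent edits.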

    Read the article

  • AFP/SMB transfers cap at 2 megabytes/sec, wireless-N

    - by RD.
    I wanted to transfer files between two Mac computers. The network is wireless-N and both computers have wireless-N modules in them. The problem is that when I transfer files between them via file sharing (AFP), the network speed caps at 2 megabytes/sec. Just downloading files from the internet I can get faster speeds, so this isn't a constriction of my wifi bandwidth; it appears to be a constriction of the protocol being used. My wifi-N link rate is 130 Mbit/s, so I should see real-world transfer speeds around 12-16 megabytes/sec. I ran this command on both computers:

        sudo sysctl -w net.inet.tcp.delayed_ack=0

    which is supposed to lower TCP overhead, but it did not affect the speed. How can I get the speed I am expecting?
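
    One thing worth checking (a sketch, not a guaranteed fix): small default TCP window sizes can also cap throughput on high-bandwidth links. On a Mac the current buffer sizes can be inspected and raised per session:

        # show the current TCP send/receive buffer sizes
        sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace

        # temporarily raise them (values here are illustrative)
        sudo sysctl -w net.inet.tcp.sendspace=262144
        sudo sysctl -w net.inet.tcp.recvspace=262144

    Comparing an AFP copy against a raw TCP transfer of the same file would also confirm whether the protocol, rather than the radio link, is the bottleneck.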

    Read the article

  • Prevent the "System" process from locking my files in a shared folder.

    - by Kamarey
    I have an application that creates files to be processed by SQL bulk insert. The files are created in a shared folder on another server and then picked up from there by SQL Server. The problem is that sometimes SQL returns an error that the file is locked by another process and can't be accessed. The process that locks these files is the "System" process. It looks like it locks the files because they are in a shared folder, but I am not sure. Using software to unlock the files manually is not an option, as the whole bulk process is automatic. The question is: why does the "System" process lock these files, and is there a way to prevent it?
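
    One pragmatic workaround on the producer side is to wait until a file can be opened exclusively before handing it to the bulk process. A hypothetical C# helper (the retry count and delay are arbitrary choices):

        using System;
        using System.IO;
        using System.Threading;

        static class FileGate
        {
            // Returns true once the file can be opened with no sharing,
            // i.e. no other process (including System) still holds it open.
            public static bool WaitUntilUnlocked(string path, int retries = 30)
            {
                for (int i = 0; i < retries; i++)
                {
                    try
                    {
                        using (File.Open(path, FileMode.Open,
                                         FileAccess.ReadWrite, FileShare.None))
                            return true;
                    }
                    catch (IOException)
                    {
                        Thread.Sleep(1000); // still locked; wait and retry
                    }
                }
                return false;
            }
        }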

    Read the article

  • How to extract an .hhp file from a .chm file

    - by Sam
    Hi, I have an A.chm file for my Windows application which runs as expected. When I decompile it using HTML Help Workshop I get a set of HTML files, an .hhc file, and an .hhk file. I then compile another file, B.chm, from these extracted files without changing any of them (I want to add more HTML content to this file, but it looks like I am losing some information in the decompile). The output file I get is 72K, whereas the original file was 75K. B.chm's contents look fine when viewed in the CHM viewer, but the behavior is lost when it is used with the application. After reading around I found that if the .hhp can be extracted from a .chm file, the file can be reconstructed as-is without losing any mappings or aliases. Is that true? How can I extract the .hhp file from a .chm file? Thanks, Sam
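
    For what it's worth, Windows itself can decompile a .chm from the command line:

        hh.exe -decompile extracted_dir A.chm

    This recovers the HTML, .hhc and .hhk, but the original .hhp project file is not stored inside the .chm; tools can at best reconstruct an approximation of it. The window definitions and the [ALIAS]/[MAP] sections, which often carry an application's context-sensitive-help behavior, live in the .hhp (and associated .ali/.h files), so they typically have to be recreated by hand or recovered from the original project.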

    Read the article

  • Getting a Temporary Table Returned from Dynamic SQL in SQL Server 05, and parsing

    - by gloomy.penguin
    So I was requested to make a few things.... (It is Monday morning, and for some reason this whole thing is turning out to be really hard for me to explain, so I am just going to post a lot of my code; sorry.) First, I needed a table:

        CREATE TABLE TICKET_INFORMATION (
            TICKET_INFO_ID INT IDENTITY(1,1) NOT NULL,
            TICKET_TYPE INT,
            TARGET_ID INT,
            TARGET_NAME VARCHAR(100),
            INFORMATION VARCHAR(MAX),
            TIME_STAMP DATETIME DEFAULT GETUTCDATE()
        )

        -- insert this row for testing...
        INSERT INTO TICKET_INFORMATION (TICKET_TYPE, TARGET_ID, TARGET_NAME, INFORMATION)
        VALUES (1, 1, 'RT_ID',
            'IF_ID,int=1&IF_ID,int=2&OTHER,varchar(10)=val,ue3&OTHER,varchar(10)=val,ue4')

    The Information column holds data that needs to be parsed into a table. This is where I am having problems. In the resulting table, Target_Name needs to become a column that holds Target_ID as a value for each row. The string that needs to be parsed is in this format:

        @var_name1,@var_datatype1=@var_value1&@var_name2,@var_datatype2=@var_value2&@var_name3,@var_datatype3=@var_value3

    And what I ultimately need as a result (in a table or table variable):

        RT_ID  IF_ID  OTHER
        1      1      val,ue3
        1      2      val,ue3
        1      1      val,ue4
        1      2      val,ue4

    And I need to be able to join on the result. Initially, I was just going to make this a function that returns a table variable, but for some reason I can't figure out how to get it into an actual table variable. Whatever parses the string needs to be usable directly in queries, so I don't think a stored procedure is really the right thing to be using. This is the code that parses the Information string... it returns the result in a temporary table.

        -- create/empty temp table for var_name, var_type and var_value fields
        if OBJECT_ID('tempdb..#temp') is not null
            drop table #temp
        create table #temp (row int identity(1,1), var_name varchar(max),
                            var_type varchar(30), var_value varchar(max))

        -- just setting stuff up
        declare @target_name varchar(max), @target_id varchar(max), @info varchar(max)
        set @target_name = (select target_name from ticket_information where ticket_info_id = 1)
        set @target_id = (select target_id from ticket_information where ticket_info_id = 1)
        set @info = (select information from ticket_information where ticket_info_id = 1)
        --print @info

        -- some of these variables are re-used later
        declare @col_type varchar(20), @query varchar(max), @select as varchar(max)
        set @query = 'select ' + @target_id + ' as ' + @target_name + ' into #target; '
        set @select = 'select * into ##global_temp from #target'

        declare @var_name varchar(100), @var_type varchar(100), @var_value varchar(100)
        declare @comma_pos int, @equal_pos int, @amp_pos int
        set @comma_pos = 1
        set @equal_pos = 1
        set @amp_pos = 0

        -- while loop to parse the string into a table
        while @amp_pos < len(@info)
        begin
            -- get new comma position
            set @comma_pos = charindex(',', @info, @amp_pos + 1)
            -- get new equal position
            set @equal_pos = charindex('=', @info, @amp_pos + 1)
            -- set stuff that is going into the table
            set @var_name = substring(@info, @amp_pos + 1, @comma_pos - @amp_pos - 1)
            set @var_type = substring(@info, @comma_pos + 1, @equal_pos - @comma_pos - 1)
            -- get new ampersand position
            set @amp_pos = charindex('&', @info, @amp_pos + 1)
            if @amp_pos = 0 or @amp_pos < @equal_pos
                set @amp_pos = len(@info) + 1
            -- set last variable for insert into table
            set @var_value = substring(@info, @equal_pos + 1, @amp_pos - @equal_pos - 1)
            -- put stuff into the temp table
            insert into #temp (var_name, var_type, var_value)
            values (@var_name, @var_type, @var_value)
            -- is this a new field?
            if ((select count(*) from #temp where var_name = (@var_name)) = 1)
            begin
                set @query = @query + ' create table #' + @var_name + '_temp ('
                           + @var_name + ' ' + @var_type + '); '
                set @select = @select + ', #' + @var_name + '_temp '
            end
            set @query = @query + ' insert into #' + @var_name + '_temp values ('''
                       + @var_value + '''); '
        end

        if OBJECT_ID('tempdb..##global_temp') is not null
            drop table ##global_temp
        exec (@query + @select)
        --select @query
        --select @select
        select * from ##global_temp

    Okay. So, the result I want and need is now in ##global_temp. How do I put all of that into something that can be returned from a function (or something)? Or can I get something more useful returned from the exec statement? In the end, the results of the parsed string need to be in a table that can be joined on and used... Ideally this would have been a view, but I guess it can't be, with all the processing that needs to be done on that information string. Ideas? Thanks!
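
    If the column set were fixed, one common pattern (a sketch; it does not solve the truly dynamic-column case, which is what makes this problem hard) is to capture dynamic SQL output with INSERT ... EXEC into a pre-declared table that callers can then join on; `some_table` below is a hypothetical joining table:

        -- works only when the shape of the result is known up front
        create table #result (RT_ID int, IF_ID int, OTHER varchar(10));
        insert into #result
        exec ('select * from ##global_temp');

        select * from #result r join some_table t on t.if_id = r.IF_ID;

    SQL Server functions cannot execute dynamic SQL at all, so with genuinely dynamic columns a stored procedure that materializes the result into a known table (as above) is usually the closest workaround.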

    Read the article

  • LaTeX table too wide. How to make it fit?

    - by Erik B
    I just started to learn LaTeX and now I'm trying to create a table. This is my code:

        \begin{table}
        \caption{Top Scorers}
        \begin{tabular}{ l l }
        \hline
        \bf Goals & \bf Players\\
        \hline
        4 & First Last, First Last, First Last, First Last\\
        3 & First Last\\
        2 & First Last\\
        1 & First Last, First Last, First Last, First Last, First Last, First Last,
            First Last, First Last, First Last, First Last, First Last, First Last,
            First Last\\
        \hline
        \end{tabular}
        \end{table}

    The problem is that the table is wider than the page. I was hoping that it would automatically fit to the page like normal text does, but it didn't. How do I tell LaTeX to make the table fit the page?
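
    One standard fix (a sketch): columns of type l never wrap, so give the Players column a fixed paragraph width and let its contents break across lines:

        \begin{table}
        \caption{Top Scorers}
        \begin{tabular}{ l p{0.8\textwidth} }
        \hline
        \textbf{Goals} & \textbf{Players}\\
        \hline
        1 & First Last, First Last, First Last, First Last, First Last, First Last\\
        \hline
        \end{tabular}
        \end{table}

    The tabularx package (with an X column type) achieves the same effect while computing the column width automatically.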

    Read the article

  • How to make a table, which is wider than screen size, scrollable?

    - by understack
    I've got a table with 2 columns, and each column is 800px wide. I want to show this table in an 800x50 window, so there should be a horizontal and a vertical scrollbar to view the complete table. While I've found a few related solutions (this and this) on SO, they only work if the table width is smaller than the screen size. In my case the screen size is 1200px and the total table width is 1600px. How could I do this? I want to achieve something like this.
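
    The usual approach (a sketch) is to leave the table at its natural width and put it inside a fixed-size wrapper that scrolls in both directions:

        <div style="width: 800px; height: 50px; overflow: auto;">
            <table style="width: 1600px;">
                <tr><td>...</td><td>...</td></tr>
            </table>
        </div>

    Because the wrapper, not the table, is what gets constrained, this works regardless of whether the table is wider than the screen.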

    Read the article

  • How to make the last line of each table part unfinished in a LaTeX longtable?

    - by diver_ru
    I have a table that is automatically stretched over several pages by the longtable package:

        \begin{longtable}{| l | l |}
        \hline
        A & B \\ \hline
        \endfirsthead
        \multicolumn{2}{l}{Table \thetable{} -- finishing} \\ \hline
        \endhead
        a1 & b1 \\ \hline
        a2 & b2 \\ \hline
        ........
        \end{longtable}

    Suppose the table is broken (automatically) between the first and second data rows. Now I have this:

        -------
        |A | B|
        -------
        |a1|b1|
        -------
        <page break>
        Table 1 -- finishing.
        -------
        |a2|b2|
        -------

    I want the following effect:

        -------
        |A | B|
        -------
        |a1|b1|
        <page break>
        Table 1 -- finishing.
        -------
        |a2|b2|
        -------

    I.e. the last line of the broken part should be unfinished.
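
    One way to get the open bottom edge (a sketch, with the caveat that it drops the rules between body rows): move all horizontal rules out of the body, leave the page-break footer empty, and draw the closing rule only in \endlastfoot:

        \begin{longtable}{| l | l |}
        \hline
        A & B \\ \hline
        \endfirsthead
        \multicolumn{2}{l}{Table \thetable{} -- finishing} \\ \hline
        \endhead
        \endfoot          % empty: nothing is drawn where the page breaks
        \hline
        \endlastfoot      % the rule appears only at the true end of the table
        a1 & b1 \\
        a2 & b2 \\
        \end{longtable}

    Keeping a rule after every body row while suppressing it only at the break point is harder, since longtable decides the break after the row (and its \hline) has already been typeset.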

    Read the article

  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
    I am at my wit's end with file server clustering under Hyper-V. I am hoping that someone might be able to help me figure out this Gordian Knot of a technology that seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a logical solution.

    My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will be acting as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file system VM. Please note, these are the secondary nodes of the clusters; the main nodes will be on the most powerful first machine.

    My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-jeebies, too bad. It has a 6TB (usable) RAID-10 array, and will (in the end) hold the primary nodes of both aforementioned clusters, but is right now holding all VMs. This is, right now: DC01, DC02, SQL01, SQL02, FS01 & FS02. Eventually, I will be adding additional VMs to handle Exchange, SharePoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL & FS VMs are the most critical for the business). If anyone is now saying, "wait, what about a SAN or a NAS for the file servers?", well, too bad. What exists on the main machine is what I have to deal with.

    I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM-host of the main machine, and attached one to each file server VM. The end result is that I want my FS01 to sit on the heavy lifter, along with its iSCSI "drive", and FS02 will sit on the secondary machine with its own iSCSI "drive" there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. As such, the clustered FS will utterly duplicate the contents of the iSCSI drives between each other, so that if one physical machine (or the FS VM) goes toes-up, the other has a full copy of the data on its own iSCSI drive.

    My problem occurs when I try to apply the file server role within the failover cluster manager. Actually, it is even before that -- it occurs when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), this forces each VM to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the entire process fails for a random disk of the pair. That is, one will come online, but the other will remain offline because it does not have the correct "owner node". I mean, really -- WTF? Of course it doesn't have the right owner node; both drives are showing the same node name! I cannot seem to have one drive show up with one node name as owner, and the other drive show up with the other node name as owner. And because both drives are not "online", I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place!

    I've got more to add, but my work is closing for the day and I have to wrap things up. I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, the storage on each machine, but a transparent failover in case one physical machine fails. Essentially, a failover FS that doesn't care which machine fails: the storage contents are replicated equally on each machine. Am I even heading in the right direction?

    Read the article

  • Problem with squid log files

    - by Gatura
    I am using SARG to get a report on the Squid log files, and I get this result:

        /usr/local/Sarg/bin/sarg -l /usr/local/squid/var/logs/access.log
        SARG: Records in file: 0, reading: 0.00%
        [... the same "Records in file: 0, reading: 0.00%" line repeated a few dozen times ...]
        sort: open failed: +6.5nr: No such file or directory
        SARG: (index) Cannot open file: /Applications/Sarg/reports/index.sort
        SARG: Records in file: 0, reading: 0.00%

    What could be the problem?
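
    Two things stand out and are easy to check (a sketch; the paths are the ones from the output above). "Records in file: 0" suggests SARG is not finding any entries in the log at all, and "+6.5nr" is the old pre-POSIX sort syntax, which newer GNU coreutils reject unless compatibility mode is enabled:

        # confirm the access log actually contains records
        wc -l /usr/local/squid/var/logs/access.log
        tail -5 /usr/local/squid/var/logs/access.log

        # on GNU coreutils, allow the legacy "sort +6.5nr" syntax this SARG build emits
        export _POSIX2_VERSION=199209

    If the log is empty, Squid's logging (or the log path) is the real problem; if it has records, upgrading to a newer SARG is the cleaner fix for the sort incompatibility.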

    Read the article

  • Expanding iSCSI LUNs (NTFS)

    - by Fatih
    I have a 4TB iSCSI LUN formatted as NTFS in Windows 2008, and I've shared this volume as a folder over SMB. When the capacity of this volume is no longer enough, I have to add more iSCSI LUNs, but the end users must still see only the folder that I shared before. So, if I expand the NTFS volume that is currently 4TB with more iSCSI LUNs (for example, two more 4TB LUNs) and one of the LUNs fails or goes missing, will all of my data in the folder be lost? I imagine that expanding the NTFS volume works like RAID 0 (striped); if it is like RAID 0, then all my data will be lost when one of the LUNs fails or goes missing. In brief, there are two questions here:
    1- What happens if one of the LUNs goes missing in an expanded NTFS volume?
    2- Is there another way to merge all of the iSCSI LUNs into a single folder, so that the users don't see any extra folders even if I add extra iSCSI LUNs to the file server? (I am not talking about DFS.)
    Regards.
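
    For reference, extending an NTFS volume onto an additional LUN is typically done with diskpart (a sketch; the volume and disk numbers are placeholders):

        diskpart
        DISKPART> list volume
        DISKPART> select volume 2
        DISKPART> extend disk=3

    Note that this creates a spanned (concatenated) dynamic volume rather than a stripe; either way it adds no redundancy, so the loss of any member LUN takes the whole volume offline.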

    Read the article

  • $.fadeTo/fadeOut() operations on Table Rows in IE fail

    - by Rick Strahl
    Here's a small problem that one of my customers ran into a few days ago: he was playing around with some of the sample code I've put out for one of my simple jQuery demos, which deals with providing a simple pulse behavior plug-in:

        $.fn.pulse = function(time) {
            if (!time) time = 2000;
            // *** this == jQuery object that contains selections
            $(this).fadeTo(time, 0.20, function() {
                $(this).fadeTo(time, 1);
            });
            return this;
        }

    It's a very simplistic plug-in and it works fine for simple pulse animations. However, he ran into a problem where it didn't work when working with tables – specifically, pulsing a table row in Internet Explorer. Works fine in FireFox and Chrome, but IE not so much. It also works just fine in IE as long as you don't try it on tables or table rows specifically. Applying it against something like this (an ASP.NET GridView):

        var sel = $("#gdEntries>tbody>tr")
                    .not(":first-child")  // no header
                    .not(":last-child")   // no footer
                    .filter(":even")
                    .addClass("gridalternate");

        // *** Demonstrate simple plugin
        sel.pulse(2000);

    fails in IE. No pulsing happens in any version of IE. After some additional experimentation with single rows and various ways of selecting each, and still failing, I've come to the conclusion that the various fade operations in jQuery simply won't work correctly on these elements in IE (any version). So even something as 'elemental' as this:

        var el = $("#gdEntries>tbody>tr").get(0);
        $(el).fadeOut(2000);

    is not working correctly. The item will stick around for 2 seconds and then magically disappear. Likewise:

        sel.hide().fadeIn(5000);

    also doesn't fade in, although the items become immediately visible in IE. Go figure that behavior out.

    Thanks to a tweet from red_square and a link he provided, here is a grid that explains what works and doesn't in IE (and most last-gen browsers) regarding opacity: http://www.quirksmode.org/js/opacity.html

    It appears from this link that table and row elements can't be made opaque, but td elements can. This means for the row selections I can force each of the td elements to be selected and then pulse all of those. Once you have the rows it's easy to explicitly select all the columns in those rows with .find("td"). Aha, the following actually works:

        var sel = $("#gdEntries>tbody>tr")
                    .not(":first-child")  // no header
                    .not(":last-child")   // no footer
                    .filter(":even")
                    .addClass("gridalternate");

        // *** Demonstrate simple plugin
        sel.find("td").pulse(2000);

    A little unintuitive, that, but it works.

    Stay away from <table> and <tr> Fades

    The moral of the story is: stay away from TR, TH and TABLE fades and opacity. If you have to do it on tables, use the columns instead, and if necessary use .find("td") on your row(s) selector to grab all the columns. I've been surprised by this, uhm, revelation, since I use fadeOut in almost every one of my applications for deletion of items, and row deletions from grids are not uncommon, especially in older apps. But it turns out that fadeOut actually works in terms of behavior: it removes the item when the timeout's done, and because the fade is relatively short-lived and I don't extensively test IE code any more, I just never noticed that the fade wasn't happening.

    Note: this behavior, or rather lack thereof, appears to be specific to table, tr and th elements. I see no problems with other elements like <div> and <li> items. Chalk this one up to another of IE's shortcomings.

    Incidentally, I'm not the only one who has failed to address this in my simplistic plug-in: the jQuery UI pulsate effect also fails on table rows in the same way.

        sel.effect("pulsate", { times: 3 }, 2000);

    and it also works with the same workaround. If you're already using jQuery UI, definitely use this version of the plugin, which provides a few more options...

    Bottom line: be careful with table-based fade operations, and remember that if you do need to fade, fade on columns.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in jQuery

    Read the article

  • iTextSharp - Bug in the table functions?

    - by Matthias
    Hello all together, I am trying to make a table like this:

        PdfPTable Table = new PdfPTable(6);
        PdfPCell Cell = new PdfPCell(new Phrase("a", Font1));
        Cell.Rowspan = 2;
        Cell.Colspan = 2;
        Table.AddCell(Cell);

        Cell = new PdfPCell(new Phrase("b", Font1));
        Cell.Rowspan = 2;
        Cell.Colspan = 2;
        Table.AddCell(Cell);

        Cell = new PdfPCell(new Phrase("c", Font1));
        Cell.Colspan = 2;
        Table.AddCell(Cell);

        Cell = new PdfPCell(new Phrase("d", Font1));
        Cell.Colspan = 2;
        Table.AddCell(Cell);

    That works fine. But changing the number of columns destroys the table. Is it a bug, or am I doing something wrong? This code destroys the table:

        PdfPTable Table = new PdfPTable(17);
        PdfPCell Cell = new PdfPCell(new Phrase("a", Font1));
        Cell.Rowspan = 2;
        Cell.Colspan = 2;
        Table.AddCell(Cell);

        Cell = new PdfPCell(new Phrase("b", Font1));
        Cell.Rowspan = 2;
        Cell.Colspan = 10;
        Table.AddCell(Cell);

        Cell = new PdfPCell(new Phrase("c", Font1));
        Cell.Colspan = 5;
        Table.AddCell(Cell);

        Cell = new PdfPCell(new Phrase("d", Font1));
        Cell.Colspan = 5;
        Table.AddCell(Cell);

    Edit: The table should have this layout:

        |-------------------------------------------------------|
        | Cell "a" with | Cell "b" with | Cell "c", colspan = 5  |
        | colspan = 2   | colspan = 10  |------------------------|
        | rowspan = 2   | rowspan = 2   | Cell "d", colspan = 5  |
        |-------------------------------------------------------|

    Best regards, Matthias

    Read the article

  • nginx: rewrite a non-existent php-file to another php-file with all arguments

    - by at0m33
    I really need help here. I've been sitting on this for some time now and haven't figured it out. I want to accomplish a very simple task: rewrite a non-existent PHP file to an existing PHP file with all arguments, like this:

        http://example.com/nonexistent.php?url=google.com
        ->  http://example.com/existent.php?url=google.com

    I tried something like this:

        rewrite ^/nonexistent.php /existent.php;

    which doesn't work ("File not found"). But redirecting a non-existent HTML file to a PHP file like this:

        rewrite ^/nonexistent.html /existent.php;

    works. I don't want to rewrite an HTML file, but this is still confusing behaviour. Therefore I also tried something like this (and some variations):

        rewrite ^/nonexistent.php?url=^(.*)$ /existent.php?url=$1;

    which is also not working (maybe the syntax is bad). Any help here? It would be very nice!
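
    Two details of nginx may be relevant here (a sketch, assuming a standard fastcgi location handles .php): rewrite matches only the path, never the query string, and the original query string is appended to the rewritten URI automatically unless the replacement itself contains a "?". So something like this should carry ?url=... across unchanged:

        location = /nonexistent.php {
            rewrite ^ /existent.php last;
        }

    The "File not found" from the first attempt likely comes from how the bare rewrite interacts with the php-handling location, so forcing re-evaluation with "last" inside a dedicated location is the usual pattern.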

    Read the article

  • How to create a text file from column and FTP that text file to server

    - by addi
    I have a workbook with 2 sheets (Sheet1 and Sheet2). On Sheet1 the user enters data, which is populated into column B; column C on Sheet2 then holds the values from columns A and B. I need to create a text file from the values in column C on Sheet2 at the click of a button, and then upload (FTP) that file to a server. So Sheet1 will have 2 buttons. Button1 saves the Excel file and creates the text file in the Windows temp directory, e.g.:

        text.xls
        text.prop  (a text file which has all the values in column C on Sheet2)

    Button2 uploads (FTP) the text file (.prop) to a server. Can anyone please send me the steps and VB code to achieve the above tasks? Thanks in advance, Addi
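
    A minimal VBA sketch of both steps (the server address, credentials and remote path are placeholders; the FTP step shells out to Windows' command-line ftp client with a generated script file):

        Sub ExportColumnC()
            Dim ws As Worksheet, f As Integer, r As Long, lastRow As Long
            Set ws = Worksheets("Sheet2")
            lastRow = ws.Cells(ws.Rows.Count, "C").End(xlUp).Row
            f = FreeFile
            Open Environ("TEMP") & "\text.prop" For Output As #f
            For r = 1 To lastRow
                Print #f, ws.Cells(r, "C").Value  ' one value per line
            Next r
            Close #f
            ThisWorkbook.Save
        End Sub

        Sub UploadProp()
            Dim f As Integer, script As String
            script = Environ("TEMP") & "\ftp.txt"
            f = FreeFile
            Open script For Output As #f
            Print #f, "open ftp.example.com"    ' placeholder server
            Print #f, "user myuser mypassword"  ' placeholder credentials
            Print #f, "put " & Environ("TEMP") & "\text.prop text.prop"
            Print #f, "bye"
            Close #f
            Shell "ftp -n -s:" & script, vbNormalFocus
        End Sub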

    Read the article

  • iPhone file corruption

    - by sfider
    Is it possible (on iPhone/iPod Touch) for a file written like this:

        if (FILE* file = fopen(filename, "wb")) {
            fwrite(buf, buf_size, 1, file);
            fclose(file);
        }

    to get corrupted, e.g. when the app is forced to terminate? From what I know, fwrite should be an atomic operation, so when I write the whole file with one call no corruption should occur. I could not find any information on the net that would say otherwise.
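
    For what it's worth, fwrite is not guaranteed to be atomic at the file level: a partially flushed buffer can be left behind if the process dies mid-write. A common defensive pattern, sketched below, is to write to a temporary file and rename it into place, since rename() within one filesystem is atomic on POSIX systems:

        #include <stdio.h>

        /* Write buf to path via a temp file so readers never see a half-written file. */
        int safe_write(const char *path, const void *buf, size_t buf_size) {
            char tmp[1024];
            snprintf(tmp, sizeof tmp, "%s.tmp", path);
            FILE *file = fopen(tmp, "wb");
            if (!file) return -1;
            if (fwrite(buf, buf_size, 1, file) != 1) { fclose(file); remove(tmp); return -1; }
            if (fclose(file) != 0) { remove(tmp); return -1; }
            return rename(tmp, path);  /* atomic swap on the same filesystem */
        }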

    Read the article

  • Two n x m relationships with the same table in mysql

    - by Christian
    I want to create a database in which there is an n x m relationship between the table drug and the table article, and an n x m relationship between the table target and the table article. I get the error:

        Cannot delete or update a parent row: a foreign key constraint fails

    What do I have to change in my code?

        DROP TABLE IF EXISTS `textmine`.`article`;
        CREATE TABLE `textmine`.`article` (
            `id` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Pubmed ID',
            `abstract` blob NOT NULL,
            `authors` blob NOT NULL,
            `journal` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        DROP TABLE IF EXISTS `textmine`.`drugs`;
        CREATE TABLE `textmine`.`drugs` (
            `id` int(10) unsigned NOT NULL COMMENT 'This ID is taken from the biosemantics dictionary',
            `primaryName` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        DROP TABLE IF EXISTS `textmine`.`targets`;
        CREATE TABLE `textmine`.`targets` (
            `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
            `primaryName` varchar(256) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        DROP TABLE IF EXISTS `textmine`.`containstarget`;
        CREATE TABLE `textmine`.`containstarget` (
            `targetid` int(10) unsigned NOT NULL,
            `articleid` int(10) unsigned NOT NULL,
            KEY `target` (`targetid`),
            KEY `article` (`articleid`),
            CONSTRAINT `article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`),
            CONSTRAINT `target` FOREIGN KEY (`targetid`) REFERENCES `targets` (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        DROP TABLE IF EXISTS `textmine`.`contiansdrug`;
        CREATE TABLE `textmine`.`contiansdrug` (
            `drugid` int(10) unsigned NOT NULL,
            `articleid` int(10) unsigned NOT NULL,
            KEY `drug` (`drugid`),
            KEY `article` (`articleid`),
            CONSTRAINT `article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`),
            CONSTRAINT `drug` FOREIGN KEY (`drugid`) REFERENCES `drugs` (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
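
    Two likely culprits here (a sketch of fixes, not a certain diagnosis). First, InnoDB requires foreign key constraint names to be unique per database, and `article` is used as a constraint name in both junction tables; giving each constraint a distinct name avoids that clash:

        CONSTRAINT `fk_containstarget_article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`),
        -- and in the other junction table:
        CONSTRAINT `fk_containsdrug_article` FOREIGN KEY (`articleid`) REFERENCES `article` (`id`)

    Second, on a re-run the DROP TABLE of `article` executes while the junction tables still reference it, which raises exactly this error. Either drop the junction tables first, or disable the checks around the drops:

        SET FOREIGN_KEY_CHECKS = 0;
        -- all DROP TABLE IF EXISTS statements here
        SET FOREIGN_KEY_CHECKS = 1;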

    Read the article

  • Is O_NONBLOCK being set a property of the file descriptor or underlying file?

    - by Daniel Trebbien
    From what I have been reading on The Open Group website on fcntl, open, read, and write, I get the impression that whether O_NONBLOCK is set on a file descriptor, and hence whether non-blocking I/O is used with the descriptor, should be a property of that file descriptor rather than the underlying file. Being a property of the file descriptor means, for example, that if I duplicate a file descriptor or open another descriptor to the same file, then I can use blocking I/O with one and non-blocking I/O with the other.

    Experimenting with a FIFO, however, it appears that it is not possible to have a blocking I/O descriptor and a non-blocking I/O descriptor to the FIFO simultaneously (so whether O_NONBLOCK is set is a property of the underlying file [the FIFO]):

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            int fds[2];
            if (pipe(fds) == -1) {
                fprintf(stderr, "`pipe` failed.\n");
                return EXIT_FAILURE;
            }

            int fd0_dup = dup(fds[0]);
            if (fd0_dup <= STDERR_FILENO) {
                fprintf(stderr, "Failed to duplicate the read end\n");
                return EXIT_FAILURE;
            }
            if (fds[0] == fd0_dup) {
                fprintf(stderr, "`fds[0]` should not equal `fd0_dup`.\n");
                return EXIT_FAILURE;
            }

            if ((fcntl(fds[0], F_GETFL) & O_NONBLOCK)) {
                fprintf(stderr, "`fds[0]` should not have `O_NONBLOCK` set.\n");
                return EXIT_FAILURE;
            }
            if (fcntl(fd0_dup, F_SETFL, fcntl(fd0_dup, F_GETFL) | O_NONBLOCK) == -1) {
                fprintf(stderr, "Failed to set `O_NONBLOCK` on `fd0_dup`\n");
                return EXIT_FAILURE;
            }
            if ((fcntl(fds[0], F_GETFL) & O_NONBLOCK)) {
                fprintf(stderr, "`fds[0]` should still have `O_NONBLOCK` unset.\n");
                return EXIT_FAILURE; // RETURNS HERE
            }

            char buf[1];
            if (read(fd0_dup, buf, 1) != -1) {
                fprintf(stderr, "Expected `read` on `fd0_dup` to fail immediately\n");
                return EXIT_FAILURE;
            } else if (errno != EAGAIN) {
                fprintf(stderr, "Expected `errno` to be `EAGAIN`\n");
                return EXIT_FAILURE;
            }

            return EXIT_SUCCESS;
        }

    This leaves me thinking: is it ever possible to have a non-blocking I/O descriptor and a blocking I/O descriptor to the same file, and if so, does it depend on the type of file (regular file, FIFO, block special file, character special file, socket, etc.)?

    Read the article

  • File system query

    - by Balaji
    Is there an easy way to query data in a file system? We are storing data in the file system (instead of a database), in XML format. Since the data is growing day by day, we are finding it difficult to query the content of the files. Can anyone suggest a tool or method to query the data in the existing file system?
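
    As a starting point (a sketch; the directory and XPath expression are made up for illustration), command-line XPath tools can query XML files in place without loading them into a database:

        # find every order element with a given id across all files
        xmllint --xpath '//order[@id="42"]' data/*.xml

    This scales poorly as the data grows, though; beyond a few thousand files, an XML database or XQuery engine, or simply importing the XML into relational tables, is the usual next step.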

    Read the article

  • Including a PHP file from another server with PHP

    - by ermac2014
    Hi, I have 2 PHP files, each on a different server. Let's say the first file is called includeThis.php and the second is called main.php. The first file is located at (http://)www.sample.com/includeThis.php and the second at (http://)www.mysite.com/main.php. What I want is to include the first file into the second file. The contents of the first file are:

        <?php
        $foo = "this is data from file one";
        ?>

    The second file is:

        <?php
        include "http://www.sample.com/includeThis.php";
        echo $foo;
        ?>

    Is there any way I can do this? Thanks in advance.
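
    Note that including over HTTP (even where allow_url_include permits it) fetches the output of the remote script, which the remote server has already executed, so a plain variable assignment like the one above never arrives as PHP source. A more common pattern (a sketch; the export.php endpoint is hypothetical) is to expose the data explicitly and fetch it:

        <?php
        // on www.sample.com: export.php
        echo json_encode(array('foo' => 'this is data from file one'));
        ?>

        <?php
        // on www.mysite.com: main.php
        $data = json_decode(file_get_contents('http://www.sample.com/export.php'), true);
        echo $data['foo'];
        ?>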

    Read the article

  • How to work with files and streams in PHP, case: if we open a file in Class A and pass the open stream to Class B

    - by Rachel
    I have two classes: one is a business logic class (BLO) and the other is a data access class (DAO), and my BLO class depends on my DAO class. Basically, I open a CSV file for writing in my BLO class's constructor, with the file name passed in from the command prompt:

        $this->fin = fopen($file,'w+') or die('Cannot open file');

    Now inside the BLO I have one function, notify, which calls getCurrentDBSnapshot on the DAO and passes the open stream so that data gets populated into it.

    BLO class constructor:

        public function __construct($file)
        {
            // Open Unica file for parsing.
            $this->fin = fopen($file,'w+') or die('Cannot open file');
            // Initialize the repository DAO.
            $this->dao = new Dao('REPO');
        }

    BLO class method that calls getCurrentDBSnapshot on the DAO:

        public function notify()
        {
            $data = $this->fin;
            var_dump($data); // resource(9) of type (stream)
            // Passing in $data, which is resource(9) of type (stream)
            $outputFile = $this->dao->getCurrentDBSnapshot($data);
        }

    DAO function getCurrentDBSnapshot, which gets the current state of a database table:

        public function getCurrentDBSnapshot($data)
        {
            $query = "SELECT * FROM myTable";
            // Basically just preparing the query.
            $stmt = $this->connection->prepare($query);
            // Execute the statement.
            $stmt->execute();
            $header = array();
            while ($row = $stmt->fetch(PDO::FETCH_ASSOC))
            {
                if (empty($header))
                {
                    // Get the header values from the table (column names).
                    $header = array_keys($row);
                    // Populate the csv file with header information from myTable.
                    fputcsv($data, $header);
                }
                // Export every row to the file.
                fputcsv($data, $row);
            }
            var_dump($data); // resource(9) of type (stream)
            return $data;
        }

    So basically I am getting back resource(9) of type (stream) from getCurrentDBSnapshot, and I store that stream into $outputFile in the BLO method notify. Now I want to close the file which I opened for writing, so it should be fclose of $outputFile or $data, but in both cases it gives me:

        var_dump(fclose($outputFile)) as bool(false)
        var_dump(fclose($data)) as bool(false)

    and:

        var_dump($outputFile) as resource(9) of type (Unknown)
        var_dump($data) as resource(9) of type (Unknown)

    My question: if I open a file using fopen in Class A, call a Class B method from a Class A method and pass the open stream, and Class B performs some work and returns the stream, how can I close that open stream in Class A's method? Or is it OK to keep that stream open and not use fclose? I would appreciate inputs, as I am not very sure how this should be implemented.
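
    A dump of "resource of type (Unknown)" usually means the stream was already closed somewhere by the time fclose ran, which is also why fclose returns false. One way to avoid double-closes (a sketch of the ownership idea, not of the original classes) is to let the class that opened the stream be the only one that ever closes it:

        <?php
        class Dao
        {
            // Writes rows into a stream it does not own; never closes it.
            public function writeSnapshot($stream)
            {
                fputcsv($stream, array('col_a', 'col_b')); // header (illustrative)
                fputcsv($stream, array('1', '2'));         // a data row
            }
        }

        class Blo
        {
            private $fin;

            public function __construct($file)
            {
                $this->fin = fopen($file, 'w+') or die('Cannot open file');
            }

            public function notify()
            {
                $dao = new Dao();
                $dao->writeSnapshot($this->fin); // pass the stream, keep ownership
            }

            public function close()
            {
                if (is_resource($this->fin)) {
                    fclose($this->fin); // closed exactly once, by the owner
                }
            }
        }

        $blo = new Blo('output.csv');
        $blo->notify();
        $blo->close();
        ?>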

    Read the article

  • NetLogo: error when putting a variable in a table, only constants allowed?

    - by Chantal
    Hello, currently I am working on a NetLogo program where I need to use nodes and links for a vehicle routing problem (links are called streets in the program). I have a practical problem with how to put the variable linkspeed into a table together with another node. Constants like 200 etc. are fine. Online I found some examples where variables are used, but I do not know why I keep getting the following error (or why NetLogo expects a constant):

        Expected a constant.

    Here is the relevant piece of code:

        extensions [table]
        streets-own [linkspeed linktoll]
        nodes-own [netw]

        ;; In another piece of code linkspeed is assigned successfully to the links

        to cheapcalc
          ;; start conditions: set costs very high (300000)
          ;; state 3 unsearched, state 2 searching, state 1 searched (for later purposes)
          ask nodes [
            set i 0
            set j count nodes
            set netw table:make
            while [i < j] [
              table:put netw (i) [3000000 3]
              set i (i + 1)
            ]
          ]
          set i 0
          let k 0
          ask node 35  ;; here I use node 35 as an example;
                       ;; node 35 is connected to nodes 34, 36, 20 and 50
          [
            table:put netw (35) [0 1]  ;; a node needs no cost to travel to itself
                                       ;; putting constants is OK
            while [i < j] [
              ask my-links [
                ask both-ends [
                  if (who != 35) [
                    set color blue
                    ;; set temp ([linkspeed] of street 35 who)
                    ;; here my real goal is to put linkspeed in instead of i,
                    ;; but i is easier than linkspeed
                    table:put netw (who) [ i 2 ]
                  ]
                ]
              ]
              set i (i + 1)
            ]
          ]
          ;; the next node would follow here; it is just repetition of the same
        end

    I hope somebody knows what is going on. Kind regards, Chantal
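
    In NetLogo, a bracketed literal like [ i 2 ] is parsed as a list of constants, which is why the compiler complains as soon as a variable name appears inside it. A sketch of the usual fix is to build the list at runtime with the list primitive instead:

        ;; instead of:  table:put netw (who) [ i 2 ]
        table:put netw (who) (list i 2)

        ;; and similarly for the linkspeed goal:
        ;; table:put netw (who) (list ([linkspeed] of street 35 who) 2)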

    Read the article

  • Problem fetching table contents when adding rows to the same table

    - by jasmine
    I'm trying to write a function for adding a category:

        function addCategory()
        {
            $cname = mysql_fix_string($_POST['cname']);
            $kabst = mysql_fix_string($_POST['kabst']);
            $kselect = $_POST['kselect'];
            $kradio = $_POST['kradio'];
            $ksubmit = $_POST['ksubmit'];
            $id = $_POST['id'];

            if ($ksubmit) {
                $query = "INSERT INTO category VALUES (' ', '{$cname}', '{$kabst}', {$kselect}, {$kradio}, ' ') ";
                $result = mysql_query($query);
                if ($result) {
                    echo "ok";
                } else {
                    echo $query;
                }
            }

            $text .= '<div class="form">
                <h2>ADD new category</h2>
                <form action="?page=addCategory" method="post">
                <ul>
                    <li><label>Category</label></li>
                    <li><input name="cname" type="text" class="inp" /></li>
                    <li><label>Description</label></li>
                    <li><textarea name="kabst" cols="40" rows="10" class="inx"></textarea></li>
                    <li>Published:</li>
                    <li>
                        <select name="kselect" class="ins">
                            <option value="1">Active</option>
                            <option value="0">Passive</option>
                        </select>
                    </li>
                    <li>Show in home page:</li>
                    <li>
                        <input type="radio" name="kradio" value="1" /> yes
                        <input type="radio" name="kradio" value="0" /> no
                    </li>
                    <li>Subcategory of</li>
                    <li>
                        <select>';

            while ($row = mysql_fetch_assoc(mysql_query("SELECT * FROM category"))) {
                $text .= '<option>' . $row['name'] . '</option>';
            }

            $text .= '</select>
                    </li>
                    <li><input name="ksubmit" type="submit" value="ekle" class="int"/></li>
                </ul>
                </form>
            ';
            return $text;
        }

    And the error:

        Fatal error: Maximum execution time of 30 seconds exceeded

    What is wrong in my function?
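
    The infinite loop is in the while condition: mysql_query() sits inside it, so the query is re-executed on every iteration and mysql_fetch_assoc() always fetches the first row of a fresh result set, which is never false. Running the query once before the loop (a sketch) fixes it:

        $result = mysql_query("SELECT * FROM category");
        while ($row = mysql_fetch_assoc($result)) {
            $text .= '<option>' . $row['name'] . '</option>';
        }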

    Read the article

  • Saving a file to the phone instead of the SD card

    - by Galip
    Hi guys, in my app I save an XML file to the user's SD card by doing:

        File newxmlfile = new File(Environment.getExternalStorageDirectory() + "/Message.xml");

    But not all users have SD cards in their phones, and therefore my app is likely to crash. How must I change my file-creating method in order to save the file to the phone's memory instead of the SD card? Also, how must I change the loading of the file? Currently it is:

        new InputSource(new FileInputStream(Environment.getExternalStorageDirectory() + "/Message.xml"))
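
    The standard way (a sketch, assuming this code runs in a Context such as an Activity, and that xmlBytes holds your serialized XML) is to use the app's private internal storage, which exists on every device:

        // writing to internal storage instead of the SD card
        FileOutputStream fos = openFileOutput("Message.xml", Context.MODE_PRIVATE);
        fos.write(xmlBytes);  // xmlBytes: your serialized XML document
        fos.close();

        // reading it back for the parser
        InputSource source = new InputSource(openFileInput("Message.xml"));

    openFileOutput/openFileInput resolve to the app's private files directory (also reachable via getFilesDir()), so no SD card is required.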

    Read the article
