Search Results

Search found 9662 results on 387 pages for 'sales and operations plan'.


  • Launching Vim via Lua

    - by Keith Pimmel
    I'm writing a simple little Lua command-line app that will build a static website. I'm storing my fragments in a SQLite database. Retrieving the data from the db is straightforward, as is saving it; my question is about editing the data. Is there an elegant way to pipe the data from Lua to Vim? Can Vim edit a memory buffer and return it? I was planning on launching the editor via os.execute('vim'), but only after grabbing a temporary file handle and dumping the database output into it. I would like to avoid touching the filesystem that way, but that is my contingency plan.
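
    A minimal sketch of the temp-file contingency plan described above - Vim cannot edit an in-memory Lua buffer directly, so the usual route is a temporary file (os.tmpname() and the blocking os.execute() call are standard Lua; the helper name is illustrative):

        -- Write text to a temp file, let the user edit it in Vim, read it back.
        local function edit_with_vim(text)
          local path = os.tmpname()
          local f = assert(io.open(path, "w"))
          f:write(text)
          f:close()
          os.execute("vim " .. path)        -- blocks until the user quits Vim
          f = assert(io.open(path, "r"))
          local edited = f:read("*a")
          f:close()
          os.remove(path)
          return edited
        end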

    Read the article

  • C#, WinForms: Which view type for periodically updated list?

    - by rdoubleui
    I have an application that periodically polls a web service (about every 10 seconds). In my application logic I keep a List<Message> holding the messages. All messages have an id and might be received out of order, so the class implements the IComparable interface. Which WinForms control would be a good fit for being regularly updated (with the items in order)? I plan to hold the last 500 messages. Should I sort the list and then update the whole form, or is data binding appropriate (performance-wise)?
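
    A sketch of one common arrangement, assuming a Message class that implements IComparable<Message> by id (field and handler names are illustrative): keep a SortedSet so out-of-order arrivals land sorted, trim to 500, and refresh a bound list once per poll rather than per item.

        // Fields on the form (needs System.Collections.Generic and System.ComponentModel):
        SortedSet<Message> messages = new SortedSet<Message>();   // ordered by id via IComparable
        BindingList<Message> binding = new BindingList<Message>(); // DataSource of a DataGridView

        // Called after each poll with the newly received messages.
        void OnPoll(IEnumerable<Message> incoming)
        {
            foreach (var m in incoming)
                messages.Add(m);                    // out-of-order inserts land sorted
            while (messages.Count > 500)
                messages.Remove(messages.Min);      // keep only the newest 500
            binding.RaiseListChangedEvents = false; // batch the UI update
            binding.Clear();
            foreach (var m in messages)
                binding.Add(m);
            binding.RaiseListChangedEvents = true;
            binding.ResetBindings();                // one repaint per poll
        }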

    Read the article

  • Tool to convert inline C# into a code behind file

    - by Jon Jones
    Hi, I have a number of legacy web controls (.ascx) that contain huge amounts of inline C#. The forms contain a lot of repeated and duplicated code. Our first plan is to move the code into a code-behind per file, then refactor, etc.; we're doing this to upgrade the client to the latest version of their CMS. At the moment we are going to have to manually copy and paste from hundreds of files: create a code-behind, copy the code, add the namespaces based on the client-side imports, and then do any tidying up. Does anybody PLEASE know of a tool that can do the majority of this work for us? Thanks
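
    No off-the-shelf tool is named in the original thread, but the mechanical part - pulling the <script runat="server"> block out of each .ascx - is easy to script. A rough, assumption-laden sketch (paths and the output naming are illustrative; the result still needs the manual namespace and tidy-up pass described above):

        // Dump the inline server script of every .ascx under a folder for review.
        using System.IO;
        using System.Text.RegularExpressions;

        class ScriptExtractor
        {
            static void Main()
            {
                var rx = new Regex(@"<script\s+runat=""server""[^>]*>(.*?)</script>",
                                   RegexOptions.Singleline | RegexOptions.IgnoreCase);
                foreach (var path in Directory.GetFiles(@"C:\site", "*.ascx",
                                                        SearchOption.AllDirectories))
                {
                    var m = rx.Match(File.ReadAllText(path));
                    if (m.Success)
                        File.WriteAllText(path + ".codebehind.cs", m.Groups[1].Value);
                }
            }
        }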

    Read the article

  • Linux - Create ftp account with read/write access to only 1 folder

    - by Gublooo
    Hey guys... I have never worked on Linux and don't plan on working on it either - the only command I probably know is "ls" :) I host my website on eApps and use their control panel to set everything up, so I have never worked with Linux directly. Now I have a one-time case where I need to give a contractor access to fix the CSS issues on my website. He basically needs FTP (read/write) access to certain folders. At a high level, this is my code structure:

        /home/webadmin/example.com/html/         images/ css/ js/ login.php facebook.php
        /home/webadmin/example.com/application/  library/ views/ models/ controllers/ config/ bootstrap.php
        /home/webadmin/example.com/cgi-bin/

    I want the new user to have access to only these folders:

        /home/webadmin/example.com/html/js
        /home/webadmin/example.com/html/css
        /home/webadmin/example.com/application/views

    He should not even be able to view the contents of the other folders, including files like bootstrap.php or login.php. If any sysadmins can help me set this account up, I will really appreciate it. Thanks
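
    One conventional way to wire this up, sketched for vsftpd with bind mounts (the user name is illustrative, and details vary by distribution - on some systems the nologin shell must also be listed in /etc/shells for FTP logins to work):

        # Create the contractor account with a jailed home directory.
        useradd -m -d /home/contractor -s /sbin/nologin contractor
        mkdir -p /home/contractor/css /home/contractor/js /home/contractor/views
        # Expose only the three folders inside the jail.
        mount --bind /home/webadmin/example.com/html/css          /home/contractor/css
        mount --bind /home/webadmin/example.com/html/js           /home/contractor/js
        mount --bind /home/webadmin/example.com/application/views /home/contractor/views
        # In /etc/vsftpd.conf: chroot_local_user=YES  (confines FTP users to $HOME)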

    Read the article

  • Strange: planner picks the lower-cost plan, but with (very) long query runtime

    - by S38
    Facts: PostgreSQL 8.4.2 on Linux. I make use of table inheritance; each table contains 3 million rows; indexes on the joining columns are set; table statistics (ANALYZE, VACUUM ANALYZE) are up to date. The only table used is "node", with various partitioned sub-tables, and the query is recursive (pg >= 8.4). Here is the query with its EXPLAIN ANALYZE output:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms

    So far so bad: the planner decides to use hash joins (good) but no indexes (bad). Now, after doing the following: SET enable_hashjoin TO false; the plan looks like this:

        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms

    Incredibly faster, because indexes are used. Notice: the estimated cost of the second query is somewhat higher than that of the first. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled sequential scans. Then the planner used indexes and hash joins, and the query was still slow; so the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.
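
    If disabling hash joins turns out to be the pragmatic fix, it can be scoped to a single transaction instead of the whole session - a sketch using standard PostgreSQL settings:

        BEGIN;
        SET LOCAL enable_hashjoin = off;   -- reverts automatically at COMMIT/ROLLBACK
        -- ... run the recursive query from above here ...
        COMMIT;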

    Read the article

  • [PHP] Access array element by value

    - by Brandon
    Given an array like:

        array(
            0 => array('name' => 'joe',  'size' => 'large'),
            1 => array('name' => 'bill', 'size' => 'small'),
        )

    I think I'm being thick, but to get the attributes of an array element when I know the value of one of its keys, I'm first looping through the elements to find the right one:

        foreach ($array as $item) {
            if ($item['name'] == 'joe') {
                # operations on $item
            }
        }

    I'm aware that this is probably very poor, but I am fairly new and am looking for a way to access this element directly by value. Or do I need the key? Thanks, Brandon
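
    The usual constant-time alternative to the loop, sketched under the assumption that names are unique: re-index the array by 'name' once, then look elements up by key.

        // Build a name => item map once, then access items directly.
        $byName = array();
        foreach ($array as $item) {
            $byName[$item['name']] = $item;
        }
        $joe = isset($byName['joe']) ? $byName['joe'] : null;
        // On PHP 5.5+ the re-indexing is a one-liner:
        // $byName = array_column($array, null, 'name');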

    Read the article

  • Image viewer application: image processing with display data

    - by Harsha
    Hello all, I am working on an image viewer application and planning to build it in WPF. My images are usually larger than 3000x3500. After searching for a week, I got sample code from MSDN, but it is written in ATL/COM. So I am planning to build the image viewer as follows: after reading the image, I will scale it down to my viewer size (the viewer is around 1000x1000). Let's call this image data the display data. Once it is displayed, I will work only with this display data: for all image processing operations I will use the display data, and when the user chooses to save the image, I will apply all the operations to the original image data. My question is: is it OK to use display data for showing and for the initial image processing operations?
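
    A sketch of the scale-down step using stock WPF imaging types (the Image control name and target size are illustrative); TransformedBitmap produces exactly the kind of smaller "display data" described above:

        // Load the full-size image, scale it to fit ~1000px, and display the result.
        var source = new BitmapImage(new Uri(path, UriKind.Absolute));
        double scale = 1000.0 / Math.Max(source.PixelWidth, source.PixelHeight);
        var display = new TransformedBitmap(source, new ScaleTransform(scale, scale));
        display.Freeze();               // makes it shareable across threads
        imageControl.Source = display;  // keep `source` around for the final save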

    Read the article

  • Cloning a Java ArrayList and preventing it from modification

    - by user222164
    I have a data structure like a database RowSet, which has Rows, and Rows have Columns. I need to initialize the Columns with null values; the current code loops through each column of a row and initializes the values to null, which is very inefficient if you have 100s of rows and 10s of columns. So instead I am keeping an initialized ArrayList of columns at RowSet level, and then doing a clone of this ArrayList for individual rows, as I believe clone() is faster than looping through each element:

        row.columnsValues = rowset.NullArrayList.clone()

    The problem with this is that NullArrayList can be accidentally modified after being cloned, sacrificing the integrity of the ArrayList at RowSet level. To prevent that I am doing 3 things: 1) declaring the ArrayList as final; 2) any elements I insert are final or null; 3) any method this ArrayList is passed through declares its parameter final. Sounds like a plan - do you see any holes?
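
    A simpler route than clone(), using only standard collections: Collections.nCopies returns an immutable list of nulls, so the template physically cannot be modified, and each row gets an independent copy via the ArrayList copy constructor. A sketch:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        int columnCount = 20;  // illustrative
        // Immutable template: any write attempt throws UnsupportedOperationException.
        List<Object> template = Collections.<Object>nCopies(columnCount, null);
        // Per row: a fresh, modifiable, independent copy.
        List<Object> rowValues = new ArrayList<Object>(template);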

    Read the article

  • Piping SoX in Python - subprocess alternative?

    - by Cochise Ruhulessin
    I use SoX in an application. The application uses it to apply various operations to audio files, such as trimming. This works fine:

        from subprocess import Popen, PIPE

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        pipe = Popen(['sox', '-t', 'mp3', '-', 'test.mp3', 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate(input=open('test.mp3', 'rb').read())
        if errors:
            raise RuntimeError(errors)

    This will cause problems on large files however, since read() loads the complete file into memory, which is slow and may cause the pipe's buffer to overflow. A workaround exists:

        from subprocess import Popen, PIPE
        import tempfile
        import uuid
        import shutil
        import os

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        tmp = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex + '.mp3')
        pipe = Popen(['sox', 'test.mp3', tmp, 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate()
        if errors:
            raise RuntimeError(errors)
        shutil.copy2(tmp, 'test.mp3')
        os.remove(tmp)

    So the question stands as follows: are there any alternatives to this approach, aside from writing a Python extension for the SoX C API?
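
    One alternative that avoids both the full read() and the temp file: pass the open file object itself as stdin, so the OS streams the data to SoX without Python buffering it. A sketch (file names as in the question):

        from subprocess import Popen, PIPE

        with open('test.mp3', 'rb') as src:
            pipe = Popen(['sox', '-t', 'mp3', '-', 'out.mp3', 'trim', '0', '15'],
                         stdin=src, stdout=PIPE, stderr=PIPE)
            output, errors = pipe.communicate()
        if errors:
            raise RuntimeError(errors)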

    Read the article

  • Twitter oauth_callback parameter being ignored!

    - by Astrofaes
    Hi guys, I'm trying to get Twitter authentication working on my ASP.NET site. When you create the app on the Twitter website, you have to specify a callback URL, which, for the sake of argument, I have set to http://mydomain.com. I've read the OAuth 1.0a spec, and to override this callback URL with your own custom one you have to send the oauth_callback parameter in the request_token phase (URL-encoded, of course). So my request URL looks like this: http://twitter.com/oauth/request_token?oauth_callback_url=http%3A%2F%2Fmydomain.com%2Ftwittercallback Supposedly, if all goes to plan, you are supposed to receive in your response data a new parameter of oauth_callback_confirmed=true in addition to your token and token secret parameters. However, my response comes through as: oauth_token=MYTOKEN&oauth_token_secret=MYTOKENSECRET I know I haven't given you guys the greatest amount to go on, but I'm at my wits' end as to why I am not receiving the oauth_callback_confirmed parameter. Without it, my application keeps defaulting back to the callback URL hard-coded on the Twitter website. If anyone could help me out, I will be eternally grateful! Thanks, A.

    Read the article

  • Possible to view T-SQL syntax of a stored proc-based SqlCommand?

    - by mconnley
    Hello! I was wondering if anybody knows of a way to retrieve the actual T-SQL that is to be executed by a SqlCommand object (with a CommandType of StoredProcedure) before it executes. My scenario involves optionally saving DB operations to a file or MSMQ before the command is actually executed. My assumption is that if you create a SqlCommand like the following:

        Using oCommand As New SqlCommand("sp_Foo")
            oCommand.CommandType = CommandType.StoredProcedure
            oCommand.Parameters.Add(New SqlParameter("@Param1", "value1"))
            oCommand.ExecuteNonQuery()
        End Using

    it winds up executing some T-SQL like:

        EXEC sp_Foo @Param1 = 'value1'

    Is that assumption correct? If so, is it possible to retrieve that actual T-SQL somehow? My goal here is to get the parsing etc. benefits of using the SqlCommand class, since I'm going to be using it anyway. Is this possible? Am I going about this the wrong way? Thanks in advance for any input!
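
    Strictly speaking, ADO.NET sends a stored-procedure call as an RPC request rather than a literal EXEC string, and SqlCommand exposes no property with the final T-SQL; but a close approximation can be rendered from the command's own metadata. A sketch (string parameters only; quoting and escaping for other types is glossed over):

        ' Render an approximate EXEC statement from a stored-proc SqlCommand.
        ' Assumes Imports System.Data.SqlClient.
        Function RenderCommand(ByVal cmd As SqlCommand) As String
            Dim sb As New System.Text.StringBuilder("EXEC " & cmd.CommandText)
            Dim sep As String = " "
            For Each p As SqlParameter In cmd.Parameters
                sb.Append(sep).Append(p.ParameterName).Append(" = '")
                sb.Append(p.Value).Append("'")
                sep = ", "
            Next
            Return sb.ToString()
        End Function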

    Read the article

  • Restrict folder sharing over cygwin sshd

    - by mtanish
    I recently installed an SSH server on my Windows 7 PC and created a separate user account for it. When I log in using SSH, I can access all the Windows directories: /cygdrive/c, /cygdrive/d, /cygdrive/e. How do I prevent this user from accessing all the Windows directories other than its home directory under Cygwin, /home/chuck/? Preferably I don't want the user to even see /cygdrive when they type "mount". Is there an easy way to do this? I want to later allow remote users to log on to this machine and avoid messing up other things. I know I can set up a separate machine, but that is a plan for later.
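
    One standard OpenSSH mechanism worth trying (a sketch; it assumes the Cygwin sshd honours the usual sshd_config directives, and it limits the account to SFTP file access rather than a full shell):

        # /etc/sshd_config -- jail 'chuck' into his home directory
        Subsystem sftp internal-sftp
        Match User chuck
            ChrootDirectory /home/chuck
            ForceCommand internal-sftp
        # Note: ChrootDirectory must be owned by root/SYSTEM and not group/world-writable.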

    Read the article

  • Best solution for reporting database

    - by zzyzx
    Here is the situation: there is a transaction-intensive database, used for both routine transactions and reports. I was wondering if I could isolate these two operations into two independent databases, so reports could run off one database and all the transactions could occur in the other. This would improve performance of the OLTP SQL database. I have gone over a few options - mirroring, log shipping, replication, snapshots, clustering - but would like to discuss the best possible strategy for the desired result. Please advise the best solution to implement this strategy, or any other thoughts/suggestions you may have.

    Read the article

  • Retrieving accessors in IronRuby

    - by rsteckly
    I'm trying to figure out how to retrieve the value stored in the Person class. The problem is that after I create an instance of the Person class, I don't know how to retrieve it within the IronRuby code, because the instance name lives on the .NET side.

        # person.rb
        class Person
          attr_accessor :name
          def initialize(strname)
            self.name = strname
          end
        end

        // We start the DLR, in this case the Ruby engine
        ScriptEngine engine = IronRuby.Ruby.CreateEngine();
        ScriptScope scope = engine.ExecuteFile("c:\\Users\\ron\\RubymineProjects\\untitled\\person.rb");
        // We get the class type
        object person = engine.Runtime.Globals.GetVariable("Person");
        // We create an instance
        object marcy = engine.Operations.CreateInstance(person, "marcy");
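
    Since the instance lives on the .NET side, its Ruby accessors can be invoked from C# through the same ObjectOperations used to create it - a sketch using the DLR hosting API:

        // Call the attr_accessor to read the value back out of the Ruby object.
        object name = engine.Operations.InvokeMember(marcy, "name");
        Console.WriteLine(name);  // prints "marcy"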

    Read the article

  • What might cause Ruby to lock up while exiting?

    - by user30997
    I have a Ruby script that does a few Perforce operations (through the scripting API), then simply ends:

        def foo()
          ...
        end

        def bar()
          ...
        end

        foo()
        bar()
        puts __LINE__
        exit 0
        # end of file

    ...and while the __LINE__ prints out, the process never ends, whether the exit(0) is there or not. This is Ruby 1.8.6, primarily on the Mac, but I'm seeing this on the PC as well. I'm doing the usual Google poking around, but hoped there might be a voice of experience here to bank on. Thanks.
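
    One diagnostic worth trying (a guess, not from the original thread): Kernel#exit runs at_exit handlers and waits for non-daemon threads, while Kernel#exit! skips both, so swapping them can reveal whether a hook or a background thread from the Perforce API is what hangs.

        # If this returns promptly where `exit 0` hangs, an at_exit hook or a
        # lingering thread is the likely culprit.
        exit!(0)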

    Read the article

  • Problem using PHP to open text file - blank spaces are removed

    - by Reg H
    Hi all, I'm trying to open and process ASCII files using PHP, but am having problems: the blank spaces are removed, which I don't want to happen, since the files are fixed-width. The PHP script I used is this:

        $myFile = "Test.SEG";
        $file_handler = fopen($myFile, "r") or die("Can't open SEG file.");
        while (!feof($file_handler)) {
            $dataline = fgets($file_handler);
            echo $dataline, "";
        }

    I tried pasting samples of the original file in here, but the spaces were removed here as well! At this stage I'm just building the script step by step, getting one step working at a time, but this is as far as I've gotten. I plan to use substr() on $dataline to pick out the fields I need. Any suggestions on how to keep the spaces intact? Something tells me it has something to do with encoding, but I don't know for sure. Thanks!
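
    One likely cause worth ruling out first (an assumption - the question doesn't say how the output is viewed): if the script's output is rendered as a web page, the browser collapses runs of spaces, so the file is intact and only the display is wrong. A sketch:

        // Preserve fixed-width spacing when the output is viewed in a browser.
        echo '<pre>';
        while (!feof($file_handler)) {
            echo htmlspecialchars(fgets($file_handler));
        }
        echo '</pre>';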

    Read the article

  • What job scheduler frameworks or solutions do you recommend on Windows Server?

    - by Samuel.P
    Hi guys, my company runs a lot of batch jobs to process data for partners. We used to use SQL Server Agent to execute batch processes, but I found it very difficult to get batch process information, like logs or status, when working with SQL Server Agent. So I'd like to move my company's job scheduling to another, more stable solution, but I don't know which one to choose. I have been trying to find something comparable to CA AutoSys. My company uses Windows Server. The framework or solution should be configurable through its API, because I plan to build an ASP.NET web application to manage our job scheduler. What job scheduler frameworks or inexpensive solutions do you suggest? Regards, Park. PS: I think the Quartz framework has a good reputation on J2EE systems, but I am not sure Quartz.NET is as good as the original Java version.

    Read the article

  • Configuration manager for PHP

    - by Jack
    I am working on refactoring the configuration-file loading part of a PHP project. Earlier I was using multiple .ini files, but now I plan to go with a single XML file containing all the configuration details of the project. The problem is: if somebody wants the configuration in an .ini file, in a DB, or anything else other than the default (in this case XML), my code should handle that too. If somebody wants to use another option such as .ini, he will have to create an .ini file mirroring my XML configuration file, and my configuration manager should take care of everything else, like parsing and caching. For that I need a mechanism - say, a proper interface for my configuration data - where the underlying data store can be anything (XML, DB, ini, etc.). I don't want the code to depend on the underlying store, and it should be extensible to other file formats in the future.
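
    A sketch of the store-agnostic interface described above (all names are illustrative); each backend parses its own format, and the manager only ever sees the interface:

        interface ConfigStore {
            public function get($key, $default = null);
        }

        class IniConfigStore implements ConfigStore {
            private $data;
            public function __construct($path) {
                $this->data = parse_ini_file($path, true);  // sections => arrays
            }
            public function get($key, $default = null) {
                return isset($this->data[$key]) ? $this->data[$key] : $default;
            }
        }

        // An XmlConfigStore, DbConfigStore, etc. would implement the same interface.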

    Read the article

  • What are some good design patterns for CRUD?

    - by Extrakun
    I am working with a number of data entities that can be created, read, updated, and deleted, and I find myself writing more or less the same code for each of them. For example, I sometimes need to output data as JSON and sometimes in a table format, and I find myself writing two different types of view to export the data to. Also, the creation of these entities in the DB usually differs only in the SQL statements and the input parameters. I am thinking of creating a strategy pattern to represent different 'contexts': for example, the read() method of an AJAX context would return the data as JSON. However, I wonder if others have dealt with this problem before, and I would like to know what design patterns are usually used for CRUD operations.
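
    A sketch of that strategy idea in PHP (names are illustrative): the CRUD code stays the same and only the rendering strategy is swapped per context.

        interface ViewStrategy {
            public function render(array $rows);
        }

        class JsonView implements ViewStrategy {
            public function render(array $rows) { return json_encode($rows); }
        }

        class TableView implements ViewStrategy {
            public function render(array $rows) {
                $html = '<table>';
                foreach ($rows as $row) {
                    $cells = array_map('htmlspecialchars', $row);
                    $html .= '<tr><td>' . implode('</td><td>', $cells) . '</td></tr>';
                }
                return $html . '</table>';
            }
        }

        // $output = $context->read(new JsonView());  // AJAX context
        // $output = $context->read(new TableView()); // HTML context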

    Read the article

  • Why does SQL Server recommend creating an index when it already exists?

    - by Pierre-Alain Vigeant
    I ran a very basic query against one of our tables, and I noticed that the execution plan's query processor recommends that we create an index on a column. The query is:

        SELECT SUM(DATALENGTH(Data))
        FROM Item
        WHERE Namespace = 'http://some_url/some_namespace/'

    After running it, I get the following message:

        // The Query Processor estimates that implementing the following index
        // could improve the query cost by 96.7211%.
        CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
        ON [dbo].[Item] ([Namespace])

    My problem is that I already have such an index on that column:

        CREATE NONCLUSTERED INDEX [IX_ItemNamespace] ON [dbo].[Item]
        (
            [Namespace] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    Why is SQL Server recommending that I create this index when it already exists?
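
    Worth noting as a side point (an observation about the query, not a confirmed explanation of the duplicate hint): the query also reads the Data column, so a covering variant of the index would let it be answered from the index alone - a sketch:

        -- Illustrative: same key, with Data as an included column so
        -- SUM(DATALENGTH(Data)) needs no lookups into the base table.
        CREATE NONCLUSTERED INDEX IX_ItemNamespace_Data
        ON dbo.Item ([Namespace]) INCLUDE (Data);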

    Read the article

  • Extracting text from a file where date-time is the index

    - by Soham
    I have around 800 files, of at most 55KB-100KB each, where the data is in this format: Date/Time/Float1/Float2/Float3/Float4/Integer. Date is in DD/MM/YYYY format and Time is in HH:MM format. The date ranges from, say, 1st May to 1st June, and on each day the time varies from 09:00 to 15:30. I want to write a program so that, for each file, it extracts the data pertaining to a particular given date and writes it to a file. I will not face any problems with the directory operations. I am trying to work out how to do the search-and-extract operation, but I don't know how and would like some ideas. Thanks, Soham
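
    A sketch of the search-and-extract step in Python (the question names no language; file names and the target date are illustrative) - since the date is the first field, a prefix test per line is enough:

        target = "04/05/2010"  # DD/MM/YYYY, the "given date"

        with open("input.txt") as src, open("output.txt", "w") as dst:
            for line in src:
                if line.startswith(target):  # date is the line's first field
                    dst.write(line)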

    Read the article

  • Artificial neural networks: height-weight problem

    - by hammid1981
    I plan to use NeuroDotNet for my PhD thesis, but before that I just want to build some small solutions to get used to the DLL structure. The first problem I want to model using backpropagation is the height-weight ratio. I have some height and weight data, and I want to train my NN so that if I put in a weight, I get the correct height as output. I have 1 input, 1 hidden, and 1 output layer. Now here is the first of many things I can't get around: my height data is of the form 1.422, 1.5422, etc., and the corresponding weight data is 90, 95, etc., but the NN takes input as 0/1 or -1/1 and gives output in the same range. How do I address this problem?
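
    The standard fix (general NN practice, nothing specific to NeuroDotNet) is min-max normalization: scale each variable into the network's working range before training and invert the mapping on the output. A sketch in C#, with illustrative data ranges:

        class Normalize
        {
            // Map x from [lo, hi] into [0, 1].
            static double Scale(double x, double lo, double hi) { return (x - lo) / (hi - lo); }
            // Invert the mapping for the network's output.
            static double Unscale(double y, double lo, double hi) { return lo + y * (hi - lo); }

            static void Main()
            {
                double input = Scale(90.0, 40.0, 150.0);   // weight 90 kg -> ~0.45
                double height = Unscale(0.25, 1.2, 2.1);   // network output -> metres
                System.Console.WriteLine("{0} {1}", input, height);
            }
        }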

    Read the article

  • Need post-it notes that don't fall off the whiteboard after a week.

    - by jdv
    In my company, we plan our development work with Scrum. We track progress using Post-it stickies on a big whiteboard, and it works great; it is my understanding that this is fairly standard. We are just one location, so we don't need or want to do this electronically. But to our (and the QA rep's) annoyance, the sticky notes begin to fall off the whiteboard after a week or two, or even sooner if you stick them on top of each other. I've experimented with extra tape on the stickies; that helped, but it also ruins the whiteboard. So I am looking for a pragmatic and preferably low-cost alternative. Are some Post-it brands better than others? Or do you have another solution for a Scrum board that does not suffer from this?

    Read the article

  • Does SQL Server Compact Edition (SqlCe) have a SNAPSHOT table like Oracle Lite?

    - by MusiGenesis
    In Oracle Lite, you can create a SNAPSHOT table, which is like a normal table except that it tracks changes to itself. The syntax is CREATE SNAPSHOT TABLE tblWhatever ..., and you can perform CRUD operations on it like a normal table. To get the change information, you query the table like this: SELECT * FROM tblWhatever + WHERE ..., which returns all the rows in the table (including deleted ones) meeting the WHERE clause, and you can access each row's row_state column as a normal field (it is invisible to a normal SELECT * FROM tblWhatever WHERE ... query). Is there some way to do the same thing with SQL Server Compact Edition (3.5) - i.e. create a table that tracks changes without using RDA?

    Read the article

  • Calling C function from DTrace scripts

    - by dmeister
    DTrace is an impressive, powerful tracing system, originally from Solaris but since ported to FreeBSD and Mac OS X. DTrace uses a high-level language called D, not unlike AWK or C. Here is an example:

        io:::start
        /pid == $1/
        {
            printf("file %s offset %d size %d block %llu\n",
                args[2]->fi_pathname, args[2]->fi_offset,
                args[0]->b_bcount, args[0]->b_blkno);
        }

    Using the command line sudo dtrace -q -s <name>.d <pid>, all I/Os originating from that process are logged. My question is whether, and how, it is possible to call custom C functions from a DTrace script, to do advanced operations with the tracing data during the tracing itself.

    Read the article
