Search Results

Search found 49453 results on 1979 pages for 'memory mapped files'.

Page 344/1979

  • CUDA: How to reuse kernels in multiple files (for unit testing)

    - by zenna
    How can I reuse the same kernel without getting fatal linking errors due to the symbol being defined multiple times? In Visual Studio I get "fatal error LNK1169: one or more multiply defined symbols found". My current structure is as follows: Interface.h has an extern interface to a C function, myCfunction() (as in the C++ integration SDK example). Kernel.cu contains the actual __global__ kernels and is NOT included in the build: __global__ my_kernel(). Wrapper.cu includes Kernel.cu and Interface.h and calls my_kernel<<<...>>>. This all works fine. But if I add another C function in another file which also includes Kernel.cu and uses those kernels, I get the errors. So how can I reuse the kernels in Kernel.cu among many C functions in different files? The purpose of this, by the way, is unit testing and integrating my kernels with CppUnit; if there is no way to reuse kernels (there must be!), then other suggestions for unit testing kernels within my existing CppUnit framework would be appreciated. Thanks Zenna
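    One possible approach, sketched below with illustrative file and function names (not the poster's actual code): give the kernel a single definition in one .cu file, declare it in a shared header, and let any number of wrapper translation units launch it. This assumes a toolchain that supports CUDA separate compilation (nvcc -rdc=true).

        // kernel.h -- declaration only, safe to include from many files
        #ifndef KERNEL_H
        #define KERNEL_H
        __global__ void my_kernel(int *data, int n);
        #endif

        // kernel.cu -- the one and only definition, compiled once with -rdc=true
        #include "kernel.h"
        __global__ void my_kernel(int *data, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) data[i] *= 2;               // placeholder body
        }

        // wrapper.cu (and any test wrapper) -- includes the header, never Kernel.cu
        #include "kernel.h"
        extern "C" void myCfunction(int *d_data, int n)
        {
            my_kernel<<<(n + 255) / 256, 256>>>(d_data, n);
        }

    Each CppUnit fixture can then link against kernel.o and call its own extern "C" wrapper without duplicating the kernel symbol.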

    Read the article

  • CSS Files Don't Refresh with Wicket (Launched in IntelliJ via Start.java)

    - by Scanningcrew
    I have created a skeleton Wicket project using mvn archetype:create -DarchetypeGroupId=org.apache.wicket -DarchetypeArtifactId=wicket-archetype-quickstart -DarchetypeVersion=1.4-rc4 -DgroupId=com.mycompany -DartifactId=myproject All the configuration and importing of the new project with Maven/IntelliJ worked fine. I proceeded to add a basic CSS file to my start page, per the following recommended way of doing it in Wicket 1.4: public class BasePage extends WebPage { public BasePage() { add(CSSPackageResource.getHeaderContribution(BasePage.class, "main.css")); } } The main.css file has been put alongside BasePage.java and BasePage.html in /src/main/java. I launch the application with Start.java. The problem is that when I make changes to the CSS file, they are not picked up when I relaunch Start.java (changes to the Java and HTML files are picked up). I made sure the browser cache was cleared, and even validated the request/response in Firefox/Firebug. It seems that somewhere between Wicket's magic and the Jetty instance Start.java creates, the CSS file is being cached and not updated. Any ideas?

    Read the article

  • Eclipse C++ with MinGW compiler can't build Boost regex example, can't find .a library files

    - by Kim
    Hi, I'm trying to build the Boost regex example in Eclipse using MinGW on Vista. I built Boost OK with MinGW, as there are library files XXXX.a. I could build/compile the first Boost example, which doesn't require any of the compiled Boost libraries. When I compile the regex example I get a linker error saying it can't find the library file. I have tried various library file names, e.g. leaving off the .a extension, leaving off the lib prefix, etc. Now the interesting thing is that if I leave off the library extension and rename the library file to XXX.lib, it works and runs OK. So why can't it read the .a library file? It must be my setup somewhere, but I don't know where or what to set. From what I read, everyone is OK linking the .a file except me :( Thanks in advance, Kim
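    For reference, a hedged sketch of what the GNU toolchain expects (the Boost version/toolset suffix below is illustrative, not taken from the question): -l<name> makes ld search for lib<name>.a on the paths given with -L, so in Eclipse the library should be entered without the lib prefix and without the .a extension, and the suffix must match the actual file name exactly.

        # Hypothetical equivalent command line for a file named libboost_regex-mgw34-mt-1_39.a
        g++ -o regex_example regex_example.o \
            -L"C:/boost/stage/lib" \
            -lboost_regex-mgw34-mt-1_39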

    Read the article

  • WebSharingAppDemo-CEProviderEndToEnd Queries peerProvider for NeedsScope before any files are batched

    - by Don
    I'm building an application based on the WebSharingAppDemo-CEProviderEndToEnd. When I deploy the server portion on a server, the code gives the error "The path is not valid. Check the directory for the database." during the call to NeedsScope() in the CeWebSyncService.cs file. Obviously the server can't access the client's .sdf, but what is supposed to happen to make this work? The app uses batching to send the data, and the batches have to be marshalled across to the temp directory, but this problem occurs before any files have been batched over. There is nothing for the server to look at to determine whether the peerProvider needs scope. What am I missing? public bool NeedsScope() { Log("NeedsSchema: {0}", this.peerProvider.Connection.ConnectionString); SqlCeSyncScopeProvisioning prov = new SqlCeSyncScopeProvisioning(); return !prov.ScopeExists(this.peerProvider.ScopeName, (SqlCeConnection)this.peerProvider.Connection); }

    Read the article

  • Indy FTP, large files and NAT routers

    - by Lobuno
    Hello! I have been using Indy to transfer files via FTP for years now, but have not been able to find a satisfactory solution for the following problem. When a user is uploading a large file from behind a router, sometimes the following happens: the file is uploaded OK, but in the meantime the command channel gets disconnected because of a timeout. Normally this doesn't happen with a direct connection to the server, because the server "knows" that a transfer is taking place on the data channel. Some routers are not aware of this, though, and the command channel is closed. Many programs send a NOOP command periodically to keep the command channel alive, even though this is not part of the standard FTP specification. My question: how do I do that? Do I send the NOOP command in the OnWork event? Does this cause any collateral damage in some way, like, do I need to process some response? How do I best solve this problem?

    Read the article

  • MongoDB, CarrierWave, GridFS and preventing duplicate files

    - by Arkan
    I am dealing with Mongoid, CarrierWave and GridFS to store my uploads. For example, I have a model Article, containing a file upload (a picture). class Article include Mongoid::Document field :title, :type => String field :content, :type => String mount_uploader :asset, AssetUploader end But I would like to store the file only once, for the case where I upload the same file many times for different articles. I saw that GridFS has an MD5 checksum. What would be the best way to prevent duplication of identical files? Thanks
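    A hedged sketch of one way to approach it (the asset_md5 field and the dedup helper are assumptions, not an established CarrierWave feature; model and uploader names are the question's): record the MD5 of each upload on the document, then look for an existing document with the same checksum before storing another copy.

        require 'digest/md5'

        class Article
          include Mongoid::Document
          field :title,     :type => String
          field :content,   :type => String
          field :asset_md5, :type => String          # checksum of the uploaded file
          mount_uploader :asset, AssetUploader

          before_save :record_asset_checksum

          # An already-saved article carrying an identical file, if any.
          def duplicate_asset_owner
            return nil if asset_md5.nil?
            Article.where(:asset_md5 => asset_md5).detect { |a| a.id != id }
          end

          private

          # Reads the cached upload into memory, which is fine for pictures.
          def record_asset_checksum
            self.asset_md5 = Digest::MD5.hexdigest(asset.read) if asset.file
          end
        end

    The application can then decide, before saving, to reference duplicate_asset_owner's asset instead of pushing the same bytes into GridFS again; the md5 field GridFS already keeps in fs.files could serve the same purpose if you prefer to query the files collection directly.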

    Read the article

  • IntelliSense and Folding Editor Not Working in Visual Studio 2008 SP1 for Certain Files Only

    - by cplotts
    OK, I have an issue that is driving me nuts. In certain XAML files only, neither IntelliSense nor the folding editor is working. I have noticed that if I delete the local namespace and add it back, the folding editor starts working. If I delete the local namespace and don't add it back, IntelliSense starts working as well. Of course, I need to remember to add that namespace declaration back before I compile and/or check in ... which is annoying. How can you fix this?

    Read the article

  • Collection of MVC CSS files available?

    - by Jaxidian
    Just curious - are there various customized Site.css files (and accompanying images) that work with the default ASP.NET MVC 2 templates? I'm a stereotypical developer who "doesn't do pretty", so I'd like to find a design that is good enough for me to use until I later have a designer come back and fix my design. Are there collections/libraries of various designs out there that work with the default templates? I did find this, but the two popular ones I tried seem like they're for MVC 1; plus, they in no way used the default tags from the MVC 2 templates.

    Read the article

  • Problem copying files through xcopy using VBScript

    - by sushant
    I am using VBScript to copy files using xcopy. The problem is that the folder path has to be entered by the user. Assuming I put that path in a variable, say h, how do I use this variable in the xcopy command? Here is the code I tried: Dim WshShell, oExec, g, h h = "D:\newfolder" g = "xcopy $h D:\y\ /E" Set WshShell = CreateObject("WScript.Shell") Set oExec = WshShell.Exec(g) I also tried &h but it did not work. Could anyone help me work out the correct syntax? Any help is appreciated.
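    A minimal sketch of the concatenation VBScript needs here (paths are the ones from the question; the doubled quotes wrap the path in quotation marks in case it contains spaces):

        Dim WshShell, oExec, g, h
        h = "D:\newfolder"
        ' Build the command with the & operator instead of embedding $h in the literal
        g = "xcopy """ & h & """ D:\y\ /E"
        Set WshShell = CreateObject("WScript.Shell")
        Set oExec = WshShell.Exec(g)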

    Read the article

  • Uploading multiple files in ASP.NET (similar to Gmail)

    - by superstar
    Hi guys, I need suggestions with regard to multiple file upload using the File Upload control in ASP.NET (along with C#). I have a File Upload control, so I click the 'Browse' button, and when I select a file from the select-file dialog, I want the file to be shown as a link below the File Upload control (somewhat similar to Gmail). This file should be shown in such a way that it can be deleted if I want to. I should also be able to upload another file from the File Upload control. All these files should be uploaded to a location when I use a button click event in the end. I think I have made myself clear. Any suggestions are really helpful. Thanks.

    Read the article

  • RichFaces rich:insert takes a long time to output large files

    - by Mark Lewis
    Hello, I'm using a RichFaces <rich:insert like this: <rich:panel header="my head"> <a4j:outputPanel ajaxRendered="true"> <rich:insert src="#{MyBacking.myPath}" highlight="groovy" /> </a4j:outputPanel> </rich:panel> If I have a 60k file to output, it takes 23 seconds. I've got a requirement to output the contents of some larger files than that, and obviously the larger the file, the longer the wait for the content. The recommendation in the answer to another related question is to introduce paging. I will, but the question is, why does it take so long to output 60k of text using JSF/RichFaces? That is, reading off a local disk on a Windows XP SP2 PC - I can see from the log that the data has already been written to disk from the network. Other scripting languages appear to be faster than this - is it something to do with the JSF lifecycle having to handle the text, maybe? Thanks

    Read the article

  • Showing HTML comment strings (<!-- -->) in HTML files

    - by Andrei
    Hello all. I'm building a source code search engine, and I'm returning the results on an HTML page (aspx to be exact, but the view logic is in HTML). When someone searches a string, I also return the whole line of code where this string can be found in a file. However, some lines of code come from HTML/aspx files, and these lines contain HTML-specific comments (<!-- -->). When I try to print such a line on the HTML page, it interprets it as a comment and does not show it on the screen... how should I go about solving this so that it actually shows up? Any help would be welcomed. Thanks. edit: err... I see now that Firebug could help me with this: <!-- -->
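    A hedged sketch of the usual fix in ASP.NET (control and variable names are illustrative): HTML-encode each code line on the server before writing it into the page, so the browser renders the characters instead of parsing them as markup.

        // e.g. in the code-behind, writing one search hit into a Literal control
        string codeLine = "<!-- this line came from an aspx file -->";

        // HtmlEncode turns < > & into &lt; &gt; &amp; so the line displays literally
        resultLiteral.Text = System.Web.HttpUtility.HtmlEncode(codeLine);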

    Read the article

  • Transforming TT files in MSBuild

    - by Phill Duffy
    I need to build a DSL solution using MSBuild and want to be able to transform the TT files. I have tried the guide at http://msdn.microsoft.com/en-us/library/ee847423(VS.100).aspx but I am getting the following errors: Failed to resolve include text for file:{0} and also Loading the include file '{0}' returned a null or empty string. There is a page on MSDN which lists these issues and their resolutions: http://msdn.microsoft.com/en-us/library/bb126242(VS.100).aspx but it doesn't really give me enough information to resolve the issue. One thing to note: the error reports the following path: Error 72 Failed to resolve include text for file:C:\source\XXXXXXXX\Dsl\GeneratedCode\Dsl\ToolboxHelper.tt. Line=-1, Column=-1 Dsl but the location of the actual TT file is C:\source\XXXXXXXX\Dsl\GeneratedCode\ToolboxHelper.tt

    Read the article

  • High-load MySQL on a Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory. Apache2, memcached and nginx also run on this server. Memory usage is always at its maximum, with only about 500 MB free. Most of the memory is taken by MySQL. Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses up all the memory it stops, and nothing works until MySQL is restarted. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables, which is why I use InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        default-time-zone='Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert = ALWAYS
        wait_timeout = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit= 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections = 200
        thread_concurrency = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit = 50M
        query_cache_size = 210M
        #on query cache
        query_cache_type = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index = /var/lib/mysql/mysql-bin.index
        log-error = /var/lib/mysql/mysql-bin.err
        relay-log = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index = /var/lib/mysql/relay-bin.index
        binlog_do_db = 24avia
        expire_logs_days = 10
        max_binlog_size = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache = 2000
        group_concat_max_len = 16M
        #binlog_do_db = gate
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M
        key_buffer_size = 512M
        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory usage (output of free, run from /etc/mysql):

                         total       used       free     shared    buffers     cached
        Mem:          32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:        8798020   24132780
        Swap:         33553328      44660   33508668

    Maybe my problem is not memory, but MySQL stops every day anyway. As you can see, about 24 GB of that is cache memory (thanks to Michael Hampton for the correction). Load average on the server is 3.5. Maybe it's the HDD or another problem? Maybe my config is not optimal for 30 GB of InnoDB? I have already tried mysqltuner and tuning-primer.sh, but they mark everything green. Mysqltuner output:

        >> MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.24-9-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 112G (Tables: 1528)
        [--] Data in InnoDB tables: 39G (Tables: 340)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 344
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B)
        [--] Reads / Writes: 84% / 16%
        [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads)
        [OK] Maximum possible memory usage: 26.3G (83% of installed RAM)
        [OK] Slow queries: 1% (259K/14M)
        [!!] Highest connection usage: 100% (201/200)
        [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G
        [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads)
        [OK] Query cache efficiency: 74.3% (8M cached / 11M selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts)
        [!!] Joins performed without indexes: 106025
        [!!] Temporary tables created on disk: 49% (351K on disk / 715K total)
        [OK] Thread cache hit rate: 99% (249 created / 259K connections)
        [!!] Table cache hit rate: 15% (2K open / 13K opened)
        [OK] Open file limit used: 15% (3K/20K)
        [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks)
        [!!] InnoDB data size / buffer pool: 39.4G/5.9G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce or eliminate persistent connections to reduce connection usage
            Adjust your join queries to always utilize indexes
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            max_connections (> 200)
            wait_timeout (< 600)
            interactive_timeout (< 600)
            join_buffer_size (> 5.0M, or always use indexes with joins)
            table_cache (> 10000)
            innodb_buffer_pool_size (>= 39G)

    Mysql primer output:

        -- MYSQL PERFORMANCE TUNING PRIMER --
        - By: Matthew Montgomery -
        MySQL Version 5.5.24-9-log x86_64
        Uptime = 0 days 8 hrs 20 min 50 sec
        Avg. qps = 478
        Total Questions = 14369568
        Threads Connected = 16
        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations
        To find out more information on how each of these runtime variables effects
        performance visit:
        http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html for info about
        MySQL's Enterprise Monitoring and Advisory Service
        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1.000000 sec.
        You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete
        Your long_query_time seems to be fine
        BINARY UPDATE LOG
        The binary update log is enabled
        Binlog sync is not enabled, you could loose binlog records during a server crash
        WORKER THREADS
        Current thread_cache_size = 50
        Current threads_cached = 45
        Current threads_per_sec = 0
        Historic threads_per_sec = 0
        Your thread_cache_size is fine
        MAX CONNECTIONS
        Current max_connections = 200
        Current threads_connected = 11
        Historic max_used_connections = 201
        The number of used connections is 100% of the configured maximum.
        You should raise max_connections
        INNODB STATUS
        Current InnoDB index space = 214 M
        Current InnoDB data space = 39.40 G
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 5.85 G
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory
        MEMORY USAGE
        Max Memory Ever Allocated : 23.46 G
        Configured Max Per-thread Buffers : 15.84 G
        Configured Max Global Buffers : 7.54 G
        Configured Max Memory Limit : 23.39 G
        Physical Memory : 31.40 G
        Max memory limit seem to be within acceptable norms
        KEY BUFFER
        Current MyISAM index space = 5.61 G
        Current key_buffer_size = 1.47 G
        Key cache miss rate is 1 : 5578
        Key buffer free ratio = 77 %
        Your key_buffer_size seems to be fine
        QUERY CACHE
        Query cache is enabled
        Current query_cache_size = 200 M
        Current query_cache_used = 101 M
        Current query_cache_limit = 50 M
        Current Query cache Memory fill ratio = 50.59 %
        Current query_cache_min_res_unit = 4 K
        MySQL won't cache query results that are larger than query_cache_limit in size
        SORT OPERATIONS
        Current sort_buffer_size = 64 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine
        JOINS
        Current join_buffer_size = 5.00 M
        You have had 106606 queries where a join could not use an index properly
        You have had 8 joins without keys that check for key usage after each row
        join_buffer_size >= 4 M
        This is not advised
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.
        OPEN FILES LIMIT
        Current open_files_limit = 20210 files
        The open_files_limit should typically be set to at least 2x-3x
        that of table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine
        TABLE CACHE
        Current table_open_cache = 10000 tables
        Current table_definition_cache = 2000 tables
        You have a total of 1910 tables
        You have 2151 open tables.
        The table_cache value seems to be fine
        TEMP TABLES
        Current max_heap_table_size = 2.92 G
        Current tmp_table_size = 2.92 G
        Of 366426 temp tables, 49% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables
        Note! BLOB and TEXT columns are not allow in memory tables.
        If you are using these columns raising these values might not impact your
        ratio of on disk temp tables.
        TABLE SCANS
        Current read_buffer_size = 3 M
        Current table scan ratio = 2846 : 1
        read_buffer_size seems to be fine
        TABLE LOCKING
        Current Lock Wait ratio = 1 : 185
        You may benefit from selective use of InnoDB.
        If you have long running SELECT's against MyISAM tables and perform
        frequent updates consider setting 'low_priority_updates=1'

    Read the article

  • How to read/write files within a kernel module?

    - by Methos
    I know all the discussions about why one should not read/write files from the kernel, and how to use /proc or netlink instead. I want to read/write anyway. I have also read http://www.linuxjournal.com/article/8110 However, the problem is that 2.6.30 does not export sys_read(); rather, it's wrapped in SYSCALL_DEFINE3. So if I use it in my module, I get the following warnings: WARNING: "sys_read" [xxx.ko] undefined! WARNING: "sys_open" [xxx.ko] undefined! Obviously insmod cannot load the module because linking does not happen correctly. Questions: How to read/write within the kernel after 2.6.22 (where sys_read()/sys_open() are not exported)? In general, how to use system calls wrapped in the macro SYSCALL_DEFINEn() from within the kernel?
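    Not an endorsement of the practice, but a minimal sketch of the approach that avoids the unexported syscalls on 2.6-era kernels (the path and buffer size are illustrative): open with filp_open() and read through vfs_read() while the address limit is temporarily widened with set_fs(KERNEL_DS).

        #include <linux/kernel.h>
        #include <linux/fs.h>
        #include <linux/err.h>
        #include <linux/uaccess.h>

        static void read_some_file(void)
        {
            struct file *f;
            mm_segment_t old_fs;
            char buf[128];
            loff_t pos = 0;
            ssize_t n;

            f = filp_open("/etc/hostname", O_RDONLY, 0);
            if (IS_ERR(f))
                return;

            old_fs = get_fs();      /* vfs_read() checks the buffer as a user pointer, */
            set_fs(KERNEL_DS);      /* so widen the limit for the duration of the call */
            n = vfs_read(f, buf, sizeof(buf) - 1, &pos);
            set_fs(old_fs);

            if (n >= 0) {
                buf[n] = '\0';
                printk(KERN_INFO "read %zd bytes: %s\n", n, buf);
            }
            filp_close(f, NULL);
        }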

    Read the article

  • Loading multiple .mat files in MATLAB

    - by smilingbuddha
    I have 110 files named time1.mat, time2.mat, ..., time110.mat. I want to load these matrices into the MATLAB workspace. I have always used load -'ASCII' matrix.mat to load an ASCII matrix file in the current folder. So I tried doing for i=1:10 filename=strcat('time',int2str(i),'.mat'); load -'ASCII' filename end But I am getting a MATLAB error: ??? Error using ==> load Unable to read file filename: No such file or directory. Of course, the string filename seems to be evaluated correctly by MATLAB as time1.mat in the first iteration, where it crashes at the load line. Any suggestions on how I should do this?
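    For reference, a sketch using the functional form of load, which evaluates filename as a variable instead of treating the literal word filename as the file name (the -ascii option and naming scheme are taken from the question):

        for i = 1:110
            filename = sprintf('time%d.mat', i);
            data = load(filename, '-ascii');   % functional form of load
            times{i} = data;                   % keep the matrices in one cell array
        end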

    Read the article

  • SSIS For Each File Loop and File System Task to copy Files

    - by Marlon
    I'm using a File System Task inside a Foreach Loop container, just as described here: link text However, when I execute the package I get this error: [File System Task] Error: An error occurred with the following error message: "The process cannot access the file 'C:\Book1.xlsx' because it is being used by another process.". I do not have the file open, and I assume no one else does, as I am able to copy, open, and overwrite the file. Any suggestions would be appreciated. If you want an example package, please let me know.

    Read the article

  • Copying files from a Rails plugin into the application upon plugin install

    - by Lou Z.
    When someone installs this plugin, I would like a file to be copied into the config/initializers directory of the app. I could do this in install.rb by copying a template file that resides somewhere in the plugin. Another option would be to require the user to run a generator after install. I know rspec-rails makes you run a generator after you install it; is that the recommended behavior? And is there anything wrong with copying files into the application in install.rb? Thanks! Lou
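    A minimal sketch of the install.rb route, assuming a classic Rails 2.x plugin layout where install.rb runs with RAILS_ROOT defined (the templates/ path and file name are hypothetical):

        # vendor/plugins/my_plugin/install.rb -- runs when the plugin is installed
        require 'fileutils'

        source = File.join(File.dirname(__FILE__), 'templates', 'my_plugin.rb')
        target = File.join(RAILS_ROOT, 'config', 'initializers', 'my_plugin.rb')

        # Don't clobber an initializer the application may already have customised
        FileUtils.cp(source, target) unless File.exist?(target)

    The generator route that rspec-rails takes is friendlier when the copied file needs per-application edits; a straight copy in install.rb is reasonable for a static default.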

    Read the article

  • How to use MSBuild to create ClickOnce files that match those created by Visual Studio

    - by EasyTimer
    I am trying to use MSBuild in a TFS build file to create ClickOnce files for my application. According to MSDN (http://msdn.microsoft.com/en-us/library/ms165431.aspx) it seems that the project file's ClickOnce settings should be used unless you override them with property arguments on the command line. However, even though I have specified (in VS) that some prerequisites should be bootstrapped in with the setup, they don't seem to get installed if I use the MSBuild command line to create the package, even though they do get installed if I use Visual Studio to create the ClickOnce package. Could someone please advise me on how to get my prerequisites installed via ClickOnce when the package is built using the MSBuild command line? For information: in Visual Studio, project properties, "Publish" tab, "Prerequisites" button, I have ticked "Create setup program to install prerequisite components", I have ticked my prerequisites, and I have specified "Download prerequisites from the component vendor's web site".

    Read the article

  • How to create a thumbnail system for MP4 files

    - by Pete Herbert Penito
    Hi everyone! I know, I know, why am I still using MP4? It's because I already have about 100 files in this format and I need to upload them to a website. I have the MP4 file embedded in the site already, and which file is played changes according to PHP. What I really need is a way to dynamically create a thumbnail, or take a snapshot of the video file, to display on the page. I've read a couple of things online, but they all require the file to be in FLV format. What would be the best way to accomplish this? Thank you guys!
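    One common approach, sketched under the assumption that ffmpeg is installed on the server (paths and the helper name are illustrative): have PHP shell out to ffmpeg and grab a single frame a few seconds into the MP4, then show that JPEG on the page.

        <?php
        // Hypothetical helper: writes a JPEG snapshot for the given MP4 file.
        function makeThumbnail($videoPath, $thumbPath, $seconds = 5)
        {
            $cmd = sprintf(
                'ffmpeg -ss %d -i %s -vframes 1 -y %s 2>&1',
                $seconds,
                escapeshellarg($videoPath),
                escapeshellarg($thumbPath)
            );
            exec($cmd, $output, $status);
            return $status === 0;
        }

        makeThumbnail('videos/clip42.mp4', 'thumbs/clip42.jpg');
        ?>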

    Read the article

  • SQL Server schema comparison - leaves behind obsolete files on synchronization

    - by b0x0rz
    Directly related to http://stackoverflow.com/questions/2768489/visual-studio-2010-database-project-is-there-a-visual-way/2772205#2772205 I have a problem synchronizing between a database project and a database in Visual Studio. Usually I synchronize FROM the database TO the database project (using Visual Studio's data schema compare / new schema comparison). The synchronization works, BUT when, for example, I corrected the spelling of a key in the database and synchronized it, the file with the WRONG spelling of the key remains (albeit commented out inside); the new one is correctly added. This file happens to be in: [project name]/Scheme Objects/Schemas/dbo/Tables/Keys but there are surely others elsewhere. How do I automatically remove such obsolete files when synchronizing? Thanks

    Read the article

  • Creating multiple log files of different content with log4j

    - by Daniel
    Is there a way to configure log4j so that it outputs different levels of logging to different appenders? I'm trying to set up multiple log files. The main log file would catch all INFO and above messages for all classes. (In development, it would catch all DEBUG and above messages, and TRACE for specific classes.) Then, I would like to have a separate log file. That log file would catch all DEBUG messages for a specific subset of classes, and ignore all messages for any other class. Is there a way to get what I'm after? Thanks, Dan
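    A sketch of one way to express this in log4j 1.x properties form (logger and file names are illustrative): the root logger stays at INFO and feeds the main file, while a named logger for the subset of classes is raised to DEBUG and given its own appender. A Threshold on the main appender keeps the subset's DEBUG chatter out of the main file, and because the second appender is attached only to the subset logger, other classes never reach the second file.

        # Main log: INFO and above from every class
        log4j.rootLogger=INFO, main
        log4j.appender.main=org.apache.log4j.FileAppender
        log4j.appender.main.File=logs/app.log
        log4j.appender.main.Threshold=INFO
        log4j.appender.main.layout=org.apache.log4j.PatternLayout
        log4j.appender.main.layout.ConversionPattern=%d %-5p %c - %m%n

        # Second log: DEBUG and above, but only for classes under com.example.subset
        log4j.logger.com.example.subset=DEBUG, subset
        log4j.appender.subset=org.apache.log4j.FileAppender
        log4j.appender.subset.File=logs/subset-debug.log
        log4j.appender.subset.layout=org.apache.log4j.PatternLayout
        log4j.appender.subset.layout.ConversionPattern=%d %-5p %c - %m%n

    In development, the root level or the per-class logger levels can simply be lowered to DEBUG/TRACE without touching the appender layout.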

    Read the article

  • Using ed to manipulate files matched by find

    - by TheOsp
    Following the bash-hackers wiki's recommendation, I want to edit files using ed. In particular I want to do the following with ed instead of sed: find . -type f -exec sed -i -e 's/a/b/g' {} \; I see that ed doesn't have an option like sed's -e, so as far as I know, pipes and I/O redirections are the only way to work with it non-interactively. So, using ed from a bash script to do the same as the above sed command would look like: ed file_name <<<$'g/a/s//b/g\nw' Or echo $'g/a/s//b/g\nw' | ed file_name But as far as I know it is impossible to involve pipes or I/O redirections within find's -exec. Am I missing something? Or is the only way to overcome this to use loops? for file in $(find . -type f -print); do ed $file <<<$'g/a/s//b/g\nw'; done;
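    One way around it, sketched below with the substitution taken from the question: let -exec start a small shell of its own, so the pipe happens inside that shell rather than inside find.

        # Each matched file becomes $1 of the inline shell, which feeds ed its script
        find . -type f -exec sh -c 'printf "%s\n" "g/a/s//b/g" w q | ed -s "$1"' sh {} \;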

    Read the article

  • Manipulating multi-track ogg files programmatically

    - by Chad Birch
    I'm planning to create a program for manipulating multi-track OGG files, but I don't have any experience with the relevant libraries, so I'm looking for recommendations about which language/library to use for this. I don't really have any preference for the language; I'll happily code it in C, C#, Python, whatever makes things the easiest (or even possible). Perhaps it's even a possibility to automate Audacity somehow? In terms of requirements, I'm not looking for anything particularly fancy. It will probably be a command-line program; I don't need to be able to play the audio, draw image representations of the waveforms, etc. The program will basically be used as a converter, but I need to do some processing before outputting. That is, I need the ability to programmatically remove some tracks, set panning per track, change track volumes, etc. Nothing too complex, just some basic processing, and then output the result in either MP3 or a format easily converted to MP3, such as WAV. Any suggestions or general information would be appreciated, thanks.

    Read the article

  • Options for Linux OS executable archive files - self-installers

    - by Matt1776
    I am looking to create a web project that can be installed with a program. The user should be able to download an archive or tar file, run it (it is executable), and the setup script would ask for paths and configurable values, then unpack its 'payload' and sort out the contents for deployment. This would be a Linux version of the MSI installer. Is there such a thing for Linux operating systems? This does not involve kernel-level manipulations. All it needs to do is copy directories and files on the filesystem, which should cover about 80% if not more of all the *nix distributions.
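    One existing tool that matches this description is makeself, which wraps a directory plus a setup script into a single self-extracting shell archive; a sketch of its use (directory, label and script names are illustrative):

        # payload/ holds the files to deploy plus a setup.sh that asks for paths/values
        makeself.sh payload/ myapp-installer.run "My Web App installer" ./setup.sh

        # The end user just downloads the .run file and executes it
        sh myapp-installer.run

    The resulting .run file is an ordinary shell script with a tarball appended, so it works on any distribution that has sh and tar.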

    Read the article
