Search Results

Search found 22569 results on 903 pages for 'win32 process'.


  • Symfony Basic API Http Authentication

    - by Daniel Hertz
    Can someone point me in the right direction regarding making an API use basic HTTP authentication? I am creating a RESTful API with Symfony but would like to require users to be logged in to get certain data. I would also like many of these methods to depend on the username from the authentication process in order to fetch some of the data (for example, using the username from the credentials to get all of a user's friends). Thanks!
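
    For what it's worth, a minimal framework-agnostic PHP sketch of how Basic credentials arrive and how to challenge for them; authenticateUser() is a hypothetical stand-in for whatever credential check you use on the Symfony side (this is not Symfony's own security layer):

      <?php
      // Minimal sketch of HTTP Basic authentication in plain PHP.
      // authenticateUser() is a hypothetical stand-in for your own credential check.
      if (!isset($_SERVER['PHP_AUTH_USER'], $_SERVER['PHP_AUTH_PW'])) {
          // No credentials yet: challenge the client.
          header('WWW-Authenticate: Basic realm="My API"');
          header('HTTP/1.0 401 Unauthorized');
          echo json_encode(array('error' => 'Authentication required'));
          exit;
      }

      $username = $_SERVER['PHP_AUTH_USER'];
      $password = $_SERVER['PHP_AUTH_PW'];

      if (!authenticateUser($username, $password)) {
          header('HTTP/1.0 403 Forbidden');
          exit;
      }

      // From here on, $username identifies the caller, so per-user data
      // (e.g. the user's friends) can be looked up with it.

    Note that when PHP runs as CGI/FastCGI the PHP_AUTH_* variables may need an extra rewrite rule to be populated.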

    Read the article

  • Wait for the second VB script to finish from the primary VB script

    - by yael
    Hi, I have a VB script that runs a second VB script. The second VB script asks some questions via an input box. My problem is that "MyShell.Run" does not wait until SecondVBscript.vbs has ended, and the other VB syntax runs immediately as well. I need to wait for the MyShell.Run process to end and then perform the other VB syntax. How can I do that?

      Set MyShell = Wscript.CreateObject("WScript.Shell")
      MyShell.Run " C:\Program Files\SecondVBscript.vbs"
      Set MyShell = Nothing
      Other VB syntax
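
    A minimal sketch of the usual fix: the third argument of WshShell.Run (bWaitOnReturn) makes the call block until the launched process exits, and the path needs embedded quotes because it contains spaces:

      ' Sketch: wait for the second script before continuing.
      ' 1 = show a normal window, True = block until the process ends.
      Dim MyShell, exitCode
      Set MyShell = WScript.CreateObject("WScript.Shell")

      exitCode = MyShell.Run("wscript.exe ""C:\Program Files\SecondVBscript.vbs""", 1, True)

      Set MyShell = Nothing
      ' ...the rest of the VB syntax runs only after SecondVBscript.vbs has ended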

    Read the article

  • Android Debugging with Logcat and Emulator. Is it possible?

    - by DJTripleThreat
    This is pretty simple: I'm using NetBeans on Linux with the Android 1.6 emulator. I have logcat on my Android phone, but the process of getting the messages somewhere readable isn't smooth at all. Can someone tell me how to get logcat running against the emulator? Is there anything I can do to see debug messages other than having to copy the APK to my phone and testing it? Thanks in advance!
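
    For reference, a hedged sketch of pulling logcat straight off the running emulator with the SDK's adb tool (assumes adb is on the PATH; the tag name is a placeholder):

      # Stream log output from the running emulator
      adb -e logcat

      # Or target a specific emulator instance and filter to your own tag
      adb -s emulator-5554 logcat MyAppTag:D '*:S'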

    Read the article

  • Releasing an OLE IStorage file handle in C#

    - by Bernard Darnton
    I'm trying to embed a PDF file into a Word document using the OLE technique described here: http://blogs.msdn.com/brian_jones/archive/2009/07/21/embedding-any-file-type-like-pdf-in-an-open-xml-file.aspx I've tried to implement the C++ code provided there in C# so that the whole project is in one place, and I'm almost there except for one roadblock. When I try to feed the generated OLE object binary data into the Word document I get an IOException: The process cannot access the file 'C:\Wherever\Whatever.pdf.bin' because it is being used by another process. There is an open file handle on the .bin file and I don't know how to get rid of it. I don't know a huge amount about COM - I'm winging it here - and I don't know where the file handle is or how to release it. Here's what my C#-ised code looks like. What am I missing?

      public void ExportOleFile(string oleOutputFileName, string emfOutputFileName)
      {
          OLE32.IStorage storage;
          var result = OLE32.StgCreateStorageEx(
              oleOutputFileName,
              OLE32.STGM.STGM_READWRITE | OLE32.STGM.STGM_SHARE_EXCLUSIVE | OLE32.STGM.STGM_CREATE | OLE32.STGM.STGM_TRANSACTED,
              OLE32.STGFMT.STGFMT_DOCFILE,
              0,
              IntPtr.Zero,
              IntPtr.Zero,
              ref OLE32.IID_IStorage,
              out storage
          );

          var CLSID_NULL = Guid.Empty;
          OLE32.IOleObject pOle;
          result = OLE32.OleCreateFromFile(
              ref CLSID_NULL,
              _inputFileName,
              ref OLE32.IID_IOleObject,
              OLE32.OLERENDER.OLERENDER_NONE,
              IntPtr.Zero,
              null,
              storage,
              out pOle
          );
          result = OLE32.OleRun(pOle);

          IntPtr unknownFromOle = Marshal.GetIUnknownForObject(pOle);
          IntPtr unknownForDataObj;
          Marshal.QueryInterface(unknownFromOle, ref OLE32.IID_IDataObject, out unknownForDataObj);
          var pdo = Marshal.GetObjectForIUnknown(unknownForDataObj) as IDataObject;

          var fetc = new FORMATETC();
          fetc.cfFormat = (short)OLE32.CLIPFORMAT.CF_ENHMETAFILE;
          fetc.dwAspect = DVASPECT.DVASPECT_CONTENT;
          fetc.lindex = -1;
          fetc.ptd = IntPtr.Zero;
          fetc.tymed = TYMED.TYMED_ENHMF;

          var stgm = new STGMEDIUM();
          stgm.unionmember = IntPtr.Zero;
          stgm.tymed = TYMED.TYMED_ENHMF;

          pdo.GetData(ref fetc, out stgm);

          var hemf = GDI32.CopyEnhMetaFile(stgm.unionmember, emfOutputFileName);

          storage.Commit((int)OLE32.STGC.STGC_DEFAULT);
          pOle.Close(0);

          GDI32.DeleteEnhMetaFile(stgm.unionmember);
          GDI32.DeleteEnhMetaFile(hemf);
      }
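
    The handle on the .bin file is most likely being kept open by the COM objects that are still alive when Word tries to read it. A minimal sketch of explicit cleanup at the end of ExportOleFile (after pOle.Close), using the variable names above; whether the GC calls are needed depends on what else still references the wrappers:

      // Sketch: release every COM reference taken above so the underlying file
      // handles are closed deterministically instead of waiting for the GC.
      // These are all standard System.Runtime.InteropServices.Marshal calls.

      // release the raw interface pointers from GetIUnknownForObject/QueryInterface
      Marshal.Release(unknownForDataObj);
      Marshal.Release(unknownFromOle);

      // release the runtime callable wrappers
      Marshal.ReleaseComObject(pdo);
      Marshal.ReleaseComObject(pOle);
      Marshal.ReleaseComObject(storage);

      // optionally force any pending COM cleanup before Word opens the .bin file
      GC.Collect();
      GC.WaitForPendingFinalizers();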

    Read the article

  • How can I change the user identity that runs a build agent in TeamCity?

    - by Chris Farmer
    I am trying to get a build process set up in TeamCity 5, and I am encountering an access denied error when trying to copy some files. I see that my build agent is running as "SYSTEM" now, and I think that's part of the problem. I'd like to change that user identity. The trouble is that I can't figure out how to change those settings on the build agent. How can I change the build user identity?
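
    The agent runs as a Windows service, so the identity is the service's log-on account. A sketch using sc from an elevated prompt; "TCBuildAgent" is only the typical TeamCity agent service name, so verify it in services.msc first (or simply change the account there on the service's Log On tab):

      REM Sketch: change the log-on identity of the build agent's Windows service.
      REM Verify the actual service name before running this; credentials are placeholders.
      sc config TCBuildAgent obj= "DOMAIN\BuildUser" password= "secret"
      sc stop TCBuildAgent
      sc start TCBuildAgent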

    Read the article

  • Easiest ways to generate graphs from Python?

    - by Noah Weiss
    I'm using Python to process CSV files filled with data that I want to run calculations on and then graph. I'm looking for a library that I can send the processed CSV data (or a dict of some sort) to and then choose different graphing styles with. Does anyone have any recommendations?
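
    A minimal sketch with the csv module plus matplotlib, one common choice for this; the file name and column names are placeholders for your own data:

      # Sketch: read two columns from a CSV and plot them with matplotlib.
      import csv
      import matplotlib.pyplot as plt

      xs, ys = [], []
      with open('data.csv') as f:
          for row in csv.DictReader(f):
              xs.append(float(row['x']))
              ys.append(float(row['y']))

      plt.plot(xs, ys, marker='o', linestyle='-')   # line plot; plt.bar/plt.scatter for other styles
      plt.xlabel('x')
      plt.ylabel('y')
      plt.title('Processed CSV data')
      plt.savefig('plot.png')                        # or plt.show() for an interactive window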

    Read the article

  • SharePoint MOSS Approve/Reject button customisation

    - by 78lro
    It seems that the out-of-the-box approve/reject buttons in SharePoint CMS try to direct you to an InfoPath form. We have implemented custom .aspx pages that we use for our workflow approval process and would like it to show these when a user clicks the buttons. Currently we get an error that the formURN cannot be found. Can I affect the URL that is shown when the approve/reject buttons are clicked? Or can I hide those buttons and provide my own buttons/functionality for that purpose? Thanks for any suggestions.

    Read the article

  • SQL Server high CPU and I/O activity database tuning

    - by zapping
    Our application has been running very slowly recently. While debugging and tracing we found that the process shows high CPU usage and SQL Server shows high I/O activity. Can you please give some guidance on how this can be optimised? The application is now about a year old and the database file sizes are not very big. The database is set to auto-shrink. It's running on Windows Server 2003 with SQL Server 2005, and the application is a web application coded in C# (VS2005).
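
    If the database really is set to auto-shrink, turning that off is a cheap first step (the shrink/grow cycle causes both I/O spikes and fragmentation), and the SQL Server 2005 DMVs can show which statements burn the CPU. A sketch, with MyAppDb as a placeholder database name:

      -- Sketch: disable auto-shrink (a frequent source of I/O churn and fragmentation)
      ALTER DATABASE MyAppDb SET AUTO_SHRINK OFF;

      -- Then look at the most expensive statements since the last restart
      SELECT TOP 20
          qs.total_worker_time   AS total_cpu,
          qs.total_logical_reads,
          qs.execution_count,
          SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
              (CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2 + 1) AS statement_text
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      ORDER BY qs.total_worker_time DESC;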

    Read the article

  • TFS Template Customization - SharePointPortal site

    - by Adam Jenkin
    I am customizing a Process Template for TFS 2008. I am using the "MSF for Agile... v4.2" template as the base template and would like to set the version control settings of the "Project Management" document library to do the following: major versions enabled, and documents must be checked out before they can be edited. I'm using the editor GUI provided by the TFS Power Tools; however, it does not appear that these settings are available there. Is it possible to define these settings in the WssTasks.XML file, or do I need to approach this from a different angle?

    Read the article

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have the following Pig script, which works perfectly using the Grunt shell (it stores the results to HDFS without any issues); however, the last job (ORDER BY) fails when I run the same script using Java EmbeddedPig. If I replace the ORDER BY job with others, such as GROUP or FOREACH GENERATE, the whole script then succeeds in Java EmbeddedPig. So I think it's the ORDER BY which causes the issue. Does anyone have any experience with this? Any help would be appreciated! The Pig script:

      REGISTER pig-udf-0.0.1-SNAPSHOT.jar;
      user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t') AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float);
      simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score;
      grouped_user_similarity = GROUP simplified_user_similarity BY user_id;
      ordered_user_similarity = FOREACH grouped_user_similarity {
          sorted = ORDER simplified_user_similarity BY sim_score DESC;
          top = LIMIT sorted 10;
          GENERATE group, top;
      };
      top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10);
      all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0);
      grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id;
      influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score;
      ordered_influence_scores = ORDER influence_scores BY influence_score DESC;
      STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage();

    The error log from Pig: 12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job 12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3 12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job 12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission. 12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir. 12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- 
/var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml 12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null 12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100 12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720 12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680 12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004 java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117) at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323) at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210) Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248) at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153) at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112) ... 6 more 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete 12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed! 12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics: HadoopVersion PigVersion UserId StartedAt FinishedAt Features 0.20.2-cdh3u3 0.8.1-cdh3u3 cchuang 2012-04-05 10:00:34 2012-04-05 10:01:04 GROUP_BY,ORDER_BY Some jobs have failed! Stop running all dependent jobs Job Stats (time in seconds): JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MaxReduceTime MinReduceTime AvgReduceTime Alias Feature Outputs job_local_0001 0 0 0 0 0 0 0 0 all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity GROUP_BY job_local_0002 0 0 0 0 0 0 0 0 grouped_influence_scores,influence_scores GROUP_BY,COMBINER job_local_0003 0 0 0 0 0 0 0 0 ordered_influence_scores SAMPLER Failed Jobs: JobId Alias Feature Message Outputs job_local_0004 ordered_influence_scores ORDER_BY Message: Job failed! 
Error - NA /tmp/cc-test-results-1, Input(s): Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000" Output(s): Failed to produce result in "/tmp/cc-test-results-1" Counters: Total records written : 0 Total bytes written : 0 Spillable Memory Manager spill count : 0 Total bags proactively spilled: 0 Total records proactively spilled: 0 Job DAG: job_local_0001 -> job_local_0002, job_local_0002 -> job_local_0003, job_local_0003 -> job_local_0004, job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs
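
    For reference, the failing job runs under Hadoop's LocalJobRunner ("LocalJobRunner does not support symlinking into current working dir"), which is exactly why the sampler file that ORDER BY ships through the distributed cache (pigsample_*) is not found. That suggests the embedded run is falling back to local mode rather than the cluster. A hedged Java sketch of forcing mapreduce mode with explicit cluster settings; the property values and script path are assumptions to adjust:

      // Sketch: run the embedded script in mapreduce mode so the ORDER BY sampler job
      // does not fall back to Hadoop's LocalJobRunner. core-site.xml/mapred-site.xml on
      // the classpath (or the properties below) must point at the real cluster.
      import java.util.Properties;
      import org.apache.pig.ExecType;
      import org.apache.pig.PigServer;

      public class RunInfluenceScores {
          public static void main(String[] args) throws Exception {
              Properties props = new Properties();
              props.setProperty("fs.default.name", "hdfs://localhost:8020");   // assumption: adjust to your cluster
              props.setProperty("mapred.job.tracker", "localhost:8021");       // assumption: adjust to your cluster

              PigServer pig = new PigServer(ExecType.MAPREDUCE, props);
              pig.registerScript("influence_scores.pig");                      // placeholder path to the script above
          }
      }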

    Read the article

  • Load SQL dump in PostgreSQL without the password dependency

    - by Cédric Girard
    Hi, I want my unit test suite to load a SQL file into my database. I use a command like:

      "C:\Program Files\PostgreSQL\8.3\bin"\psql --host 127.0.0.1 --dbname unitTests --file C:\ZendStd\www\voo4\trunk\resources\sql\base_test_projectx.pg.sql --username postgres 2>&1

    It runs fine from the command line, but it requires me to have a pgpass.conf. Since I need to run the unit test suite on each development PC and on the development server, I want to simplify the deployment process. Is there any command line option which includes the password? Thanks, Cédric
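
    One way to avoid the pgpass.conf dependency is the PGPASSWORD environment variable, which psql reads at connect time. A sketch as a Windows batch fragment (the password value is a placeholder):

      REM Sketch: supply the password via the environment instead of pgpass.conf
      set PGPASSWORD=secret
      "C:\Program Files\PostgreSQL\8.3\bin\psql" --host 127.0.0.1 --dbname unitTests --username postgres --file C:\ZendStd\www\voo4\trunk\resources\sql\base_test_projectx.pg.sql 2>&1
      set PGPASSWORD=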

    Read the article

  • Automate TortoiseSVN commit using cruise control

    - by pratap
    Hi all, I am new to TortoiseSVN. Can anyone tell me how to automate TortoiseSVN's commit process using CruiseControl.NET? My attempt to do so results in an exception being thrown. My main concern is to auto-close the window that pops up when we execute the command "tortoiseproc /command: commit /path:"**********PATH********* /logmsg: "log msg" /closeonend:1"
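
    TortoiseProc.exe is the GUI front end, so it will always raise a dialog; for an unattended commit the usual route is the plain svn.exe command-line client driven from an <exec> task. A sketch of a ccnet.config fragment, with paths, message and credentials as placeholders:

      <!-- Sketch: call the command-line Subversion client instead of TortoiseProc
           so no window appears. -->
      <exec>
        <executable>C:\Program Files\Subversion\bin\svn.exe</executable>
        <buildArgs>commit "C:\path\to\workingcopy" -m "Automated commit from CruiseControl.NET" --non-interactive --username builduser --password secret</buildArgs>
        <buildTimeoutSeconds>300</buildTimeoutSeconds>
      </exec>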

    Read the article

  • Why does Android Account & Sync reboot when trying to find my settings activity?

    - by mobibob
    I have an activity that I can declare with the Launcher category and it launches just fine from the home screen. However, when I try to hook up the same activity as my SyncAdapter's settings activity and open it from the Accounts & Sync page - MySyncAdapter - (touch the account listing), it aborts with a system fatal error (reboots the phone). Meanwhile, my SyncAdapter is working in other respects. Here is the log at the point of impact:

      01-13 12:31:00.976 5024 5038 I ActivityManager: Starting activity: Intent { act=android.provider.Settings.ACTION_SYNC_SETTINGS flg=0x10000000 cmp=com.myapp.android.syncadapter.ui/SyncAdapterSettingsActivity.class (has extras) }
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: *** FATAL EXCEPTION IN SYSTEM PROCESS: android.server.ServerThread
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: android.content.ActivityNotFoundException: Unable to find explicit activity class {com.myapp.android.syncadapter.ui/SyncAdapterSettingsActivity.class}; have you declared this activity in your AndroidManifest.xml?
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: at android.app.Instrumentation.checkStartActivityResult(Instrumentation.java:1404)
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: at android.app.Instrumentation.execStartActivity(Instrumentation.java:1378)
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: at android.app.ContextImpl.startActivity(ContextImpl.java:622)
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: at android.preference.Preference.performClick(Preference.java:828)
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: at android.preference.PreferenceScreen.onItemClick(PreferenceScreen.java:190)
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: at android.widget.AdapterView.performItemClick(AdapterView.java:284)
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: at android.widget.ListView.performItemClick(ListView.java:3382)
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: at android.widget.AbsListView$PerformClick.run(AbsListView.java:1696)
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: at android.os.Handler.handleCallback(Handler.java:587)
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: at android.os.Handler.dispatchMessage(Handler.java:92)
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: at android.os.Looper.loop(Looper.java:123)
      01-13 12:31:00.985 5024 5038 E AndroidRuntime: at com.android.server.ServerThread.run(SystemServer.java:517)
      01-13 12:31:00.985 5024 5038 I Process : Sending signal. PID: 5024 SIG: 9
      01-13 12:31:01.005 5019 5019 I Zygote : Exit zygote because system server (5024) has terminated
      01-13 12:31:01.015 1211 1211 E installd: eof

    Here is a snippet from my manifest file:

      <activity android:name="com.myapp.android.syncadapter.ui.SyncAdapterSettingsActivity"
                android:label="@string/title_settings"
                android:windowSoftInputMode="stateAlwaysHidden|adjustPan">
          <intent-filter>
              <action android:name="android.intent.action.VIEW" />
              <action android:name="android.intent.action.MAIN" />
              <action android:name="android.provider.Settings.ACTION_SYNC_SETTINGS"/>
              <category android:name="android.intent.category.DEFAULT" />
              <category android:name="android.intent.category.LAUNCHER" />
          </intent-filter>
      </activity>
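
    The ActivityNotFoundException shows a component name that ends in ".class" and whose class part is not fully qualified, which suggests the settings entry (typically the preferences XML referenced by the authenticator's android:accountPreferences attribute) points at the wrong class string. A sketch of what that preferences file might look like; the targetPackage value is an assumption about the application package, and the file name is the conventional one:

      <!-- Sketch: res/xml/account_preferences.xml, referenced from the authenticator's
           android:accountPreferences attribute. Point the intent at the fully qualified
           class name, without a ".class" suffix. -->
      <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android">
          <PreferenceScreen android:title="@string/title_settings">
              <intent
                  android:targetPackage="com.myapp.android"
                  android:targetClass="com.myapp.android.syncadapter.ui.SyncAdapterSettingsActivity" />
          </PreferenceScreen>
      </PreferenceScreen>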

    Read the article

  • Recommendation for screen video capture for demos

    - by Simon
    I am just in the process of making some little demo tutorial videos for my app to help users get up to speed. I have used Camtasia in the past but don't really like it. Anyone have any recommendations? Good is more important than free, but free certainly helps. I can use Windows or Mac for this job.

    Read the article

  • Apache2 php fastcgi, run for each request.

    - by SPnova
    I've installed PHP FastCGI for Apache2 following this doc: http://library.linode.com/web-servers/apache/php-cgi/ubuntu-9.10-karmic Pages are opening fine, but I don't understand why I can't see a php5-cgi process in top. It looks like Apache runs php5-cgi for each request, but the processes should already be running. Could you help me find the problem?

    Read the article

  • jquery pagination

    - by user295189
    We are using jQuery for pagination. We are pulling millions of records from the database and then jQuery does the pagination on the front end, which is a very slow process. Can someone advise us on a solution in PHP and jQuery where we pull 50 records at a time? Thanks
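
    A hedged sketch of the server side: let the database page the data with LIMIT/OFFSET and return one page of JSON per request, so jQuery only ever renders 50 rows. Table, columns and connection details are placeholders:

      <?php
      // Sketch: the client requests /records.php?page=N and gets back just one page.
      $perPage = 50;
      $page    = isset($_GET['page']) ? max(1, (int)$_GET['page']) : 1;
      $offset  = ($page - 1) * $perPage;

      $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

      // Both values are integers, so it is safe to inline them into the LIMIT clause.
      $sql  = sprintf('SELECT id, name FROM records ORDER BY id LIMIT %d OFFSET %d', $perPage, $offset);
      $rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);

      header('Content-Type: application/json');
      echo json_encode($rows);

    On the jQuery side, each "next page" click then just fetches the next page number via $.getJSON instead of paginating the full result set client-side.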

    Read the article

  • Log files for group policy application deployment

    - by Cyril
    I'm looking into using Group Policy to deploy a couple of applications. I want to have the log of each installation written to a shared folder on a file server for tracking purposes. I can create the log if I pass the appropriate parameters, for example: msiexec /i Package.msi /l*vx c:\Package.log However, using Group Policy for the deployment, you can't pass any parameters to the installation file. Is there any way to specify the log file location as part of creating the MSI package?
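
    One option that works with GPO-deployed packages is the machine-wide Windows Installer logging policy, which can itself be distributed via Group Policy. The catch is that it writes MSI*.log files to the installing context's temp folder rather than to a path you choose, so a startup script would still be needed to copy them to the share. A sketch as a .reg file:

      Windows Registry Editor Version 5.00

      ; Sketch: enable verbose Windows Installer logging machine-wide.
      ; Logs are written as MSI*.log to the installing context's %TEMP%
      ; (%windir%\Temp for machine-assigned packages); the location itself
      ; is not configurable through this policy.
      [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Installer]
      "Logging"="voicewarmup"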

    Read the article

  • Any Open Source Pregel like framework for distributed processing of large Graphs?

    - by Akshay Bhat
    Google has described a novel framework for distributed processing on massive graphs. http://portal.acm.org/citation.cfm?id=1582716.1582723 I wanted to know whether, similar to Hadoop (MapReduce), there are any open-source implementations of this framework? I am actually in the process of writing a pseudo-distributed one using Python and the multiprocessing module, and thus wanted to know if someone else has also tried implementing it, since public information about this framework is extremely scarce (the link above and a blog post at Google Research).
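
    For what it's worth, here is a toy, single-process sketch of the Pregel model itself (vertex-centric compute(), synchronous supersteps, message passing, vote-to-halt), which is the loop a multiprocessing-based version would shard across workers; the max-value program and the tiny graph are made up for illustration:

      # Sketch: messages sent in superstep N are delivered in superstep N+1;
      # execution stops once every vertex votes to halt and no messages are in flight.
      class Vertex:
          def __init__(self, vid, value, out_edges):
              self.id = vid
              self.value = value
              self.out_edges = out_edges

          def compute(self, messages, superstep, send):
              # Example vertex program: converge on the graph's maximum value.
              new_value = max([self.value] + messages)
              changed = superstep == 0 or new_value > self.value
              self.value = new_value
              if changed:
                  for target in self.out_edges:
                      send(target, self.value)
              return changed  # False means "vote to halt"

      def run(vertices, max_supersteps=30):
          inbox = {vid: [] for vid in vertices}
          for superstep in range(max_supersteps):
              outbox = {vid: [] for vid in vertices}
              send = lambda target, msg: outbox[target].append(msg)
              still_active = False
              for v in vertices.values():
                  if v.compute(inbox[v.id], superstep, send):
                      still_active = True
              inbox = outbox
              if not still_active and not any(outbox.values()):
                  break
          return vertices

      graph = {
          'a': Vertex('a', 3, ['b']),
          'b': Vertex('b', 6, ['a', 'c']),
          'c': Vertex('c', 2, ['a']),
      }
      run(graph)
      print({vid: v.value for vid, v in graph.items()})  # {'a': 6, 'b': 6, 'c': 6}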

    Read the article

  • Faster Matrix Multiplication in C#

    - by Kyle Lahnakoski
    I have a small C# project that involves matrices. I am processing large amounts of data by splitting it into n-length chunks, treating the chunks as vectors, and multiplying by a Vandermonde** matrix. The problem is, depending on the conditions, the size of the chunks and the corresponding Vandermonde** matrix can vary. I have a general solution which is easy to read, but way too slow:

      public byte[] addBlockRedundancy(byte[] data) {
          if (data.Length!=numGood) D.error("Expecting data to be just "+numGood+" bytes long");
          aMatrix d=aMatrix.newColumnMatrix(this.mod, data);
          var r=vandermonde.multiplyBy(d);
          return r.ToByteArray();
      }//method

    This can process about 1/4 megabytes per second on my i5 U470 @ 1.33GHz. I can make this faster by manually inlining the matrix multiplication:

      int o=0;
      int d=0;
      for (d=0; d<data.Length-numGood; d+=numGood) {
          for (int r=0; r<numGood+numRedundant; r++) {
              Byte value=0;
              for (int c=0; c<numGood; c++) {
                  value=mod.Add(value, mod.Multiply(vandermonde.get(r, c), data[d+c]));
              }//for
              output[r][o]=value;
          }//for
          o++;
      }//for

    This can process about 1 meg a second. (Please note the "mod" is performing operations over GF(2^8) modulo my favorite irreducible polynomial.) I know this can get a lot faster: after all, the Vandermonde** matrix is mostly zeros. I should be able to make a routine, or find a routine, that can take my matrix and return an optimized method which will effectively multiply vectors by the given matrix, but faster. Then, when I give this routine a 5x5 Vandermonde matrix (the identity matrix), there is simply no arithmetic to perform, and the original data is just copied. ** Please note: when I use the term "Vandermonde", I actually mean an identity matrix with some number of rows from the Vandermonde matrix appended (see comments). This matrix is wonderful because of all the zeros, and because if you remove enough rows (of your choosing) to make it square, it is an invertible matrix. And, of course, I would like to use this same routine to convert any one of those inverted matrices into an optimized series of instructions. How can I make this matrix multiplication faster? Thanks! (edited to correct my mistake with Vandermonde matrix)
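
    One way to exploit the zeros without generating code is to flatten each matrix row into its non-zero (column, coefficient) pairs once, and let the inner loop visit only those; identity rows then shrink to a single-entry pass. A sketch meant to live in the same class as the code above (so mod, vandermonde, numGood, numRedundant and output resolve; System.Collections.Generic is assumed to be imported):

      // Sketch: precompute the sparse form of each row, then multiply against it.
      struct RowEntry { public int Col; public byte Coeff; }
      RowEntry[][] sparseRows;

      void BuildSparseRows()
      {
          int rows = numGood + numRedundant;
          sparseRows = new RowEntry[rows][];
          for (int r = 0; r < rows; r++)
          {
              var entries = new List<RowEntry>();
              for (int c = 0; c < numGood; c++)
              {
                  byte coeff = vandermonde.get(r, c);
                  if (coeff != 0)
                      entries.Add(new RowEntry { Col = c, Coeff = coeff });
              }
              sparseRows[r] = entries.ToArray();
          }
      }

      // Inner loop now only touches non-zero coefficients.
      int o = 0;
      for (int d = 0; d + numGood <= data.Length; d += numGood, o++)
      {
          for (int r = 0; r < sparseRows.Length; r++)
          {
              byte value = 0;
              foreach (var e in sparseRows[r])
                  value = mod.Add(value, mod.Multiply(e.Coeff, data[d + e.Col]));
              output[r][o] = value;
          }
      }

    A further step would be to special-case a coefficient of 1 (a plain GF(2^8) add, i.e. XOR) and to check whether mod.Multiply already uses log/antilog tables.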

    Read the article
