Search Results

Search found 3339 results on 134 pages for 'frequency analysis'.


  • Managed language for scientific computing software

    - by heisen
    Scientific computing is algorithm intensive and can also be data intensive. It often needs to use a lot of memory to run an analysis and release it before continuing with the next. Sometimes it also uses a memory pool to recycle memory for each analysis. A managed language is interesting here because it allows the developer to concentrate on the application logic. Since it might need to deal with huge datasets, performance is important too. But how can we control memory and performance with a managed language?
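
    A minimal C# sketch of one common answer: rent large working buffers from a shared pool so each analysis reuses memory instead of pressuring the GC. ArrayPool<T> lives in System.Buffers (.NET Core, or the System.Buffers NuGet package); the analysis body here is a stand-in, not a real algorithm.

        using System;
        using System.Buffers;

        static class AnalysisRunner
        {
            // Rent a working buffer from the shared pool, run one analysis,
            // then return the buffer so the next analysis reuses the same
            // memory instead of allocating afresh and pressuring the GC.
            public static double Analyze(double[] samples)
            {
                double[] work = ArrayPool<double>.Shared.Rent(samples.Length);
                try
                {
                    Array.Copy(samples, work, samples.Length);
                    double sum = 0;
                    for (int i = 0; i < samples.Length; i++)
                        sum += work[i];          // stand-in for the real number crunching
                    return sum / samples.Length;
                }
                finally
                {
                    // Returning hands the memory back to the pool, not the GC.
                    ArrayPool<double>.Shared.Return(work);
                }
            }
        }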

    Read the article

  • <100% Test coverage - best practices in selecting test areas

    - by Paul Nathan
    Suppose you're working on a project and the time/money budget does not allow 100% coverage of all code/paths. It then follows that some critical subset of your code needs to be tested. Clearly a 'gut-check' approach can be used to test the system, where intuition and manual analysis can produce some sort of test coverage that will be 'ok'. However, I'm presuming that there are best practices/approaches/processes that identify critical elements up to some threshold and let you focus your test elements on those blocks. For example, one popular process for identifying failures in manufacturing is Failure Mode and Effects Analysis. I'm looking for a process(es) to identify critical testing blocks in software.

    Read the article

  • The best way to organize WPF projects

    - by Mike
    Hello everybody, I've recently started developing new software in WPF, and I still don't know the best way to organize the application to be more productive with Visual Studio and Expression Blend. I noticed two annoying things I'd like to solve: I'm using Code Contracts in my projects, and when I run my project with Expression Blend, it launches the static analysis of the code. How can I stop that? Which configuration of the project does Blend use by default? I've tried to disable Code Contracts in a new configuration. It works in VS, as the static analysis is not launched, but it has no effect in Blend. I've thought about splitting the Windows application into two parts: the first containing the WPF views (app.exe) and the second being the core of the project, with the logic code (app.core.dll); I would then open only the former project in Blend. Any thoughts about that? Thanks in advance Mike
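
    One approach, as a sketch: give the solution a dedicated build configuration (say, "Blend") with the Code Contracts checkers switched off, and have Blend build against it. The property names below are an assumption based on what the Code Contracts project page normally writes into a .csproj; compare them against a configuration you save from the Visual Studio properties page before relying on them.

        <!-- Hypothetical "Blend" configuration in the .csproj; property names
             assumed from the Code Contracts tooling, verify in your project. -->
        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Blend|AnyCPU' ">
          <CodeContractsEnableRuntimeChecking>False</CodeContractsEnableRuntimeChecking>
          <CodeContractsRunCodeAnalysis>False</CodeContractsRunCodeAnalysis>
        </PropertyGroup>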

    Read the article

  • In MySQL, what is the most effective query design for joining large tables with many to many relatio

    - by lighthouse65
    In our application, we collect data on automotive engine performance -- basically source data on engine performance based on the engine type, the vehicle running it and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this:

        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | vehicle | engine | engine_state | state_start_time    | state_end_time      | engine_variable |
        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | 080025  | E01    | active       | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 |             720 |
        | 080028  | E02    | inactive     | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 |             304 |
        +---------+--------+--------------+---------------------+---------------------+-----------------+

    For a specific analysis, we would like to analyze table content based on a row granularity of minutes, rather than the current basis of active / inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing, and joining the productionMinute and engineState tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day for that three-month period. The first few rows of the productionMinute table:

        +-------------------+
        | production_minute |
        +-------------------+
        | 2009-12-01 00:00  |
        | 2009-12-01 00:01  |
        | 2009-12-01 00:02  |
        | 2009-12-01 00:03  |
        +-------------------+

    The join between the tables would be engineState AS es LEFT JOIN productionMinute AS pm ON es.state_start_time <= pm.production_minute AND pm.production_minute <= es.state_end_time. This join, however, brings up multiple environmental issues: the engineState table has 5 million rows and the productionMinute table has 130,000 rows. When an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as is the case in the example above, multiple productionMinute rows join to a single engineState row. When there is more than one engine in operation during any given minute, also as per the example above, multiple engineState rows join to a single productionMinute row. In testing our logic, using only a small table extract (one day rather than 3 months for the productionMinute table), the query takes over an hour to generate. In researching this item in order to improve performance so that it would be feasible to query three months of data, our thought was to create a temporary table from the engineState one, eliminating any table data that is not critical for the analysis, and joining the temporary table to the productionMinute table. We are also planning on experimenting with different joins -- specifically an inner join -- to see if that would improve performance. What is the best query design for joining tables with the many:many relationship between the join predicates as outlined above? What is the best join type (left / right, inner)?
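
    A sketch of the pre-filter-then-inner-join idea described above, in MySQL syntax; the date window and index definition are assumptions to be tuned against the real data, not a definitive design:

        -- Restrict engineState to the analysis window first, index the join
        -- columns, then INNER JOIN so minutes with no matching state are
        -- never materialized (table/column names follow the question).
        CREATE TEMPORARY TABLE es_window AS
          SELECT vehicle, engine, state_start_time, state_end_time, engine_variable
          FROM engineState
          WHERE state_end_time   >= '2009-12-01 00:00:00'
            AND state_start_time <  '2010-03-01 00:00:00';

        ALTER TABLE es_window ADD INDEX idx_state (state_start_time, state_end_time);

        SELECT pm.production_minute, es.vehicle, es.engine, es.engine_variable
        FROM productionMinute AS pm
        INNER JOIN es_window AS es
                ON es.state_start_time <= pm.production_minute
               AND pm.production_minute <= es.state_end_time;

    Range predicates like these only use the leading index column, so shrinking es_window before the join is what buys most of the time.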

    Read the article

  • Dynamic function docstring

    - by Tom Aldcroft
    I'd like to write a Python function that has a dynamically created docstring. In essence, for a function func() I want func.__doc__ to be a descriptor that calls a custom __get__ function to create the docstring on request. Then help(func) should return the dynamically generated docstring. The context here is writing a Python package wrapping a large number of command line tools in an existing analysis package. Each tool becomes a similarly named module function (created via a function factory and inserted into the module namespace), with the function documentation and interface arguments dynamically generated via the analysis package.
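
    A sketch of the factory approach, assuming a hypothetical get_tool_doc() backend that asks the analysis package for documentation: the docstring is generated once, when the wrapper is created, which is enough for help(). If the docstring must be recomputed on every help() call, the usual trick is a callable class whose type defines __doc__ as a property instead.

        import sys

        def make_tool(tool_name, run_tool, get_tool_doc):
            """Factory: wrap one command-line tool as a module-level function."""
            def wrapper(*args, **kwargs):
                return run_tool(tool_name, *args, **kwargs)
            wrapper.__name__ = tool_name
            # Generated at creation time; help(wrapper) shows this string.
            wrapper.__doc__ = get_tool_doc(tool_name)
            return wrapper

        # Insert the generated functions into the module namespace
        # ("dmcopy" and "dmstat" are illustrative tool names).
        _mod = sys.modules[__name__]
        for _name in ("dmcopy", "dmstat"):
            setattr(_mod, _name,
                    make_tool(_name, run_tool=print,
                              get_tool_doc=lambda n: "Wrapper for the '%s' tool." % n))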

    Read the article

  • Autocorrelation returns random results with mic input (using a high pass filter)

    - by Niall
    Hello, Sorry to ask a similar question to the one I asked before (FFT Problem (Returns random results)), but I've looked up pitch detection and autocorrelation and have found some code for pitch detection using autocorrelation. I'm trying to do pitch detection of a user's singing. The problem is, it keeps returning random results. I've got some code from http://code.google.com/p/yaalp/ which I've converted to C++ and modified (below). My sample rate is 2048, and the data size is 1024. I'm detecting the pitch of both a sine wave and mic input. The frequency of the sine wave is 726.0, and it's detected as 722.950820 (which I'm OK with), but the pitch of the mic is detected as a random number from around 100 to around 1050. I'm now using a high-pass filter to remove the DC offset, but it's not working. Am I doing it right, and if so, what else can I do to fix it? Any help would be greatly appreciated! double* doHighPassFilter(short *buffer) { // Do FFT: int bufferLength = 1024; float *real = (float*)malloc(bufferLength*sizeof(float)); float *real2 = (float*)calloc(bufferLength, sizeof(float)); // calloc: zero the slots the copy loop below skips for(int x=0;x<bufferLength;x++) { real[x] = buffer[x]; } fft(real, bufferLength); for(int x=0;x<bufferLength;x+=2) { real2[x] = real[x]; } for (int i=0; i < 30; i++) //Set freqs lower than 30 Hz to zero to attenuate the low frequencies real2[i] = 0; // Do inverse FFT: inversefft(real2,bufferLength); // Copy the filtered floats into a double buffer (casting the float* to double* would reinterpret the bits) double *real3 = (double*)malloc(bufferLength*sizeof(double)); for(int x=0;x<bufferLength;x++) { real3[x] = real2[x]; } free(real); free(real2); return real3; } double DetectPitch(short* data) { int sampleRate = 2048; //Create sine wave double *buffer = (double*)malloc(1024*sizeof(double)); double amplitude = 0.25 * 32768; //0.25 * max value of short double frequency = 726.0; for (int n = 0; n < 1024; n++) { buffer[n] = (short)(amplitude * sin((2 * 3.14159265 * n * frequency) / sampleRate)); } double *filtered = doHighPassFilter(data); printf("Pitch from sine wave: %f\n",detectPitchCalculation(buffer, 50.0, 1000.0, 1, 1)); printf("Pitch from mic: %f\n",detectPitchCalculation(filtered, 50.0, 1000.0, 1, 1)); free(filtered); free(buffer); return 0; } // These work by shifting the signal until it seems to correlate with itself. // In other words, if the signal looks very similar to (signal shifted 200 samples) then the fundamental period is probably 200 samples. // Note that the algorithm only works well when there's only one prominent fundamental. // This could be optimized by looking at the rate of change to determine a maximum without testing all periods. double detectPitchCalculation(double* data, double minHz, double maxHz, int nCandidates, int nResolution) { //-------------------------1-------------------------// // note that higher frequency means lower period int nLowPeriodInSamples = hzToPeriodInSamples(maxHz, 2048); int nHiPeriodInSamples = hzToPeriodInSamples(minHz, 2048); if (nHiPeriodInSamples <= nLowPeriodInSamples) printf("Bad range for pitch detection."); if (1024 < nHiPeriodInSamples) printf("Not enough data."); double *results = new double[nHiPeriodInSamples - nLowPeriodInSamples]; //-------------------------2-------------------------// for (int period = nLowPeriodInSamples; period < nHiPeriodInSamples; period += nResolution) { double sum = 0; // for each sample, find correlation. (If they are far apart, small.) for (int i = 0; i < 1024 - period; i++) sum += data[i] * data[i + period]; double mean = sum / 1024.0; results[period - nLowPeriodInSamples] = mean; } //-------------------------3-------------------------// // find the best indices int *bestIndices = findBestCandidates(nCandidates, results, nHiPeriodInSamples - nLowPeriodInSamples - 1); //note findBestCandidates modifies parameter // convert back to Hz double *res = new double[nCandidates]; for (int i=0; i < nCandidates;i++) res[i] = periodInSamplesToHz(bestIndices[i]+nLowPeriodInSamples, 2048); double pitch2 = res[0]; delete[] res; delete[] results; delete[] bestIndices; return pitch2; } /// Finds n "best" values from an array. Returns the indices of the best parts. /// (One way to do this would be to sort the array, but that could take too long.) /// Warning: Changes the contents of the array!!! Do not use the input array afterwards. int* findBestCandidates(int n, double* inputs,int length) { if (length < n) printf("Length of inputs is not long enough."); int *res = new int[n]; double minValue = 0; for (int c = 0; c < n; c++) { // find the highest. double fBestValue = minValue; int nBestIndex = -1; for (int i = 0; i < length; i++) { if (inputs[i] > fBestValue) { nBestIndex = i; fBestValue = inputs[i]; } } // record this highest value res[c] = nBestIndex; // now blank out that index. if(nBestIndex!=-1) inputs[nBestIndex] = minValue; } return res; } int hzToPeriodInSamples(double hz, int sampleRate) { return (int)(1 / (hz / (double)sampleRate)); } double periodInSamplesToHz(int period, int sampleRate) { return 1 / (period / (double)sampleRate); } Thanks, Niall. Edit: Changed the code to implement a high-pass filter with a cutoff of 30 Hz (from What Are High-Pass and Low-Pass Filters?; can anyone tell me how to convert the low-pass filter using convolution to a high-pass one?), but it's still returning random results. Plugging it into a VST host and using VST plugins to compare spectrums isn't an option for me, unfortunately.
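
    If the only goal is removing the DC offset, a simple time-domain DC-blocking filter avoids the FFT round trip entirely. A minimal C++ sketch; the 0.995 pole value is a typical textbook choice, not something taken from the question's code:

        #include <cstddef>

        // One-pole DC blocker: y[n] = x[n] - x[n-1] + R * y[n-1]
        // Removes the 0 Hz component while leaving audible frequencies intact.
        void dcBlock(const short* in, double* out, std::size_t n, double R = 0.995)
        {
            double prevIn = 0.0, prevOut = 0.0;
            for (std::size_t i = 0; i < n; ++i) {
                double x = static_cast<double>(in[i]);
                double y = x - prevIn + R * prevOut;
                prevIn  = x;
                prevOut = y;
                out[i]  = y;
            }
        }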

    Read the article

  • Finding C++ static initialization order problems

    - by Fred Larson
    We've run into some problems with the static initialization order fiasco, and I'm looking for ways to comb through a whole lot of code to find possible occurrences. Any suggestions on how to do this efficiently? Edit: I'm getting some good answers on how to SOLVE the static initialization order problem, but that's not really my question. I'd like to know how to FIND objects that are subject to this problem. Evan's answer seems to be the best so far in this regard; I don't think we can use valgrind, but we may have memory analysis tools that could perform a similar function. That would catch problems only where the initialization order is wrong for a given build, and the order can change with each build. Perhaps there's a static analysis tool that would catch this. Our platform is IBM XLC/C++ compiler running on AIX.
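
    For anyone combing code by hand, the construct to hunt for is a namespace-scope object whose initializer runs code and touches a global defined in another translation unit. A minimal C++ sketch of the hazardous pattern (file names hypothetical):

        // log.cpp
        #include <string>
        std::string g_logPrefix = "app: ";   // dynamically initialized global

        // banner.cpp
        #include <string>
        extern std::string g_logPrefix;
        // Whether g_logPrefix is constructed before this line runs is
        // unspecified across translation units -- that is the fiasco.
        std::string g_banner = g_logPrefix + "starting";

    Grepping for globals of class type (anything whose initialization is not a compile-time constant) narrows the haystack considerably.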

    Read the article

  • Can I ask Postgresql to ignore errors within a transaction

    - by fmark
    I use Postgresql with the PostGIS extensions for ad-hoc spatial analysis. I generally construct and issue SQL queries by hand from within psql. I always wrap an analysis session within a transaction, so if I issue a destructive query I can roll it back. However, when I issue a query that contains an error, it cancels the transaction. Any further queries elicit the following warning: ERROR: current transaction is aborted, commands ignored until end of transaction block Is there a way I can turn this behaviour off? It is tiresome to rollback the transaction and rerun previous queries every time I make a typo.
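
    psql has a setting for exactly this: ON_ERROR_ROLLBACK makes psql issue an implicit SAVEPOINT before each statement and roll back to it on error, so only the failed statement is discarded. A minimal sketch (the parcels table is illustrative):

        -- In psql (or put the \set line in ~/.psqlrc):
        \set ON_ERROR_ROLLBACK interactive   -- 'on' always; 'interactive' only for live sessions

        BEGIN;
        SELECT count(*) FROM parcels;
        SELECT * FROM no_such_table;         -- the error stays contained to this statement
        SELECT count(*) FROM parcels;        -- still works; the transaction is not aborted
        COMMIT;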

    Read the article

  • Simple database design and LINQ

    - by Anders Svensson
    I have very little experience designing databases, and now I want to create a very simple database that does the same thing I have previously had in xml. Here's the xml: <services> <service type="writing"> <small>125</small> <medium>100</medium> <large>60</large> <xlarge>30</xlarge> </service> <service type="analysis"> <small>56</small> <medium>104</medium> <large>200</large> <xlarge>250</xlarge> </service> </services> Now, I wanted to create the same thing in a SQL database, and started doing this (hope this formats ok, but you'll get the gist: four columns and two rows):

        ServiceType   Small   Medium   Large
        Writing       125     100      60
        Analysis      56      104      200

    This didn't work too well, since I then wanted to use LINQ to select, say, the Large value for Writing (60). But I couldn't use LINQ for this (as far as I know) and use a variable for the size (see parameters in the method below). I could only do that if I had a column like "Size" where Small, Medium, and Large would be the values. But that doesn't feel right either, because then I would get several rows with ServiceType = Writing (3 in this case, one for each size), and the same for Analysis. And if I were to add more service types I would have to do the same. Simply repetitive... Is there any smart way to do this using relationships or something? Using the second design above (although not good), I could use the following LINQ to select a value with parameters sent to the method: protected int GetHourRateDB(string serviceType, Size size) { CalculatorLinqDataContext context = new CalculatorLinqDataContext(); var data = (from calculatorData in context.CalculatorDatas where calculatorData.Service == serviceType && calculatorData.Size == size.ToString() select calculatorData).Single(); return data.Hours; } But if there is another better design, could you please also describe how to do the same selection using LINQ with that design? Please keep in mind that I am a rookie at database design, so please be as explicit and pedagogical as possible :-) Thanks! Anders
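
    For what it's worth, the "repetitive" shape described above (one row per ServiceType/Size pair) is ordinary normalization rather than a design smell, and it is exactly what makes the size parametric in LINQ. A minimal C# sketch against a hypothetical Rates table (ServiceType, Size, Hours); the table and context member names are illustrative, not from the question:

        // Hypothetical normalized table: Rates(ServiceType, Size, Hours),
        // e.g. ("writing", "Large", 60) -- one row per type/size pair.
        protected int GetHourRateDB(string serviceType, Size size)
        {
            using (var context = new CalculatorLinqDataContext())
            {
                return (from rate in context.Rates
                        where rate.ServiceType == serviceType
                              && rate.Size == size.ToString()
                        select rate.Hours).Single();
            }
        }

    With this design, adding a new service type or size is an INSERT, not a schema change.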

    Read the article

  • How do you measure latency in low-latency environments?

    - by Ajaxx
    Here's the setup... Your system is receiving a stream of data that contains discrete messages (usually between 32-128 bytes per message). As part of your processing pipeline, each message passes through two physically separate applications which exchange the data using a low-latency approach (such as messaging over UDP or RDMA), and finally to a client via the same mechanism. Assuming you can inject yourself at any level, including wire protocol analysis, what tools and/or techniques would you use to measure the latency of your system? As part of this, I'm assuming that every message that is delivered to the system results in a corresponding (though not equivalent) message being pushed through the system and delivered to the client. The only tool I've seen on the market like this is TS-Associates TipOff. I'm sure that with the right access you could probably measure the same information using a wire analysis tool (à la Wireshark) and the right dissectors, but is this the right approach, or are there any commodity solutions that I can use?
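
    One commodity technique within a single host: stamp each message at ingress with a monotonic clock and publish the deltas at each hop. A minimal C++ sketch (the struct layout and names are illustrative):

        #include <chrono>
        #include <cstdint>

        using MonoClock = std::chrono::steady_clock;

        struct StampedMessage {
            std::int64_t ingress_ns;   // written once, on receipt from the wire
            char payload[120];
        };

        inline std::int64_t now_ns() {
            return std::chrono::duration_cast<std::chrono::nanoseconds>(
                MonoClock::now().time_since_epoch()).count();
        }

        inline void stamp(StampedMessage& m) { m.ingress_ns = now_ns(); }

        // At any later hop on the same host: elapsed latency for this message.
        inline std::int64_t latency_ns(const StampedMessage& m) {
            return now_ns() - m.ingress_ns;
        }

    Across hosts a steady clock no longer lines up, so you either need synchronized clocks (e.g. PTP) or passive wire capture with hardware timestamps, which is where appliances like TipOff come in.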

    Read the article

  • GIS: When and why to use ArcObjects over GDAL programming to work with ArcGIS rasters and vectors?

    - by anotherobject
    I'm just starting off with GDAL + Python to support operations that cannot be done with ArcGIS Python geoprocessing scripting. Mainly I am doing spatial modeling/analysis/editing of raster and vector data. I am a bit confused about when ArcObjects development is required versus when GDAL can be used. Is there functionality in ArcObjects that GDAL does not provide? Is the opposite true too? I am assuming that ArcObjects is more useful for developing online tools versus desktop analysis and modeling, where the difference is more a matter of preference? In my case I prefer GDAL because of its Python support, which I believe ArcObjects lacks. thanks!

    Read the article

  • Data Mining open source tools

    - by Andriyev
    Hi, I'm due to take up a project which is into data mining. Before I jump in I wanted to probe around for different data mining tools (preferably open source) that allow web-based reporting. In my scenario all the data would be provided to me, so I'm not supposed to crawl for it. In a nutshell, I am looking for a tool that does data analysis and web-based reporting, provides some kind of a dashboard, and has mining features. I have worked on Microsoft Analysis Services and BOXI, and of late I have been looking at Pentaho, which seems to be a good option. Please share your experiences with any such tool you know of. cheers

    Read the article

  • Creating a new buffer with text using EmacsClient

    - by User1
    I have a program that can send text to any other program for further analysis (e.g. sed, grep, etc.). I would like it to send the data to Emacs and do the analysis there. How would I do that? EmacsClient takes a filename by default; this is a data string, not a file, and I really don't want to create and delete files just to send data to Emacs. EmacsClient has an "eval" command-line option that lets you execute Lisp code instead of opening files. Is there a simple Lisp function that will open a new buffer with the given text?
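
    A minimal sketch: define a helper once in your init file, then drive it through emacsclient --eval. generate-new-buffer, insert, and pop-to-buffer are all standard Emacs Lisp functions; my-receive-data is an illustrative name, and escaping quotes in the payload is the only fiddly part.

        ;; In ~/.emacs or ~/.emacs.d/init.el:
        (defun my-receive-data (text)
          "Show TEXT in a fresh buffer for ad-hoc analysis."
          (with-current-buffer (generate-new-buffer "*piped-data*")
            (insert text)
            (pop-to-buffer (current-buffer))))

        ;; From the sending program (shell syntax):
        ;;   emacsclient --eval '(my-receive-data "line one\nline two")'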

    Read the article

  • Is there a Java Descriptor like thing in .Net?

    - by Sun Liwen
    I'm working on a static analysis tool for .NET assemblies. In Java, there is a Descriptor which can be used to represent a method or field as a string with a specified grammar. For the field double d[][][]; it will be [[[D. It's useful especially when doing bytecode analysis, because it's easy to describe. Is there a similar thing in the .NET CLR? Or is there a better way to achieve this? Thanks!
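
    The CLR has no exact twin of JVM descriptors in the BCL, but System.Type already yields a unique, parseable signature string; a quick C# sketch:

        using System;
        using System.Collections.Generic;

        class Demo
        {
            static void Main()
            {
                Console.WriteLine(typeof(double[][][]).FullName);
                // "System.Double[][][]" -- roughly the role of the JVM's "[[[D"
                Console.WriteLine(typeof(List<int>).FullName);
                // "System.Collections.Generic.List`1[[System.Int32, mscorlib, ...]]"
            }
        }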

    Read the article

  • multiple move operations and data processes in work thread

    - by younevertell
    main thread -- start work thread -- StartStage (get list of positions for data processing) -- move to one position -- data sampling -- data collection -- data analysis -- ... -- data sampling. Basically, the work thread runs the data sampling -- data collection -- data analysis -- data sampling loop for one position until stop is pressed or the target is obtained. My question: after the work thread finishes the loop for one position, it ends itself. How do I make the work thread move on to the next position and run the data process loop there once it finishes one position's work, so that it does not end itself until the data processing for all positions is done? Thanks in advance!
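
    A sketch of the usual shape: the worker owns the whole position list and loops over it, checking an atomic stop flag between steps, so the thread only returns after every position is processed or stop is pressed. All names and the stub bodies here are illustrative, not from the question:

        #include <atomic>
        #include <thread>
        #include <vector>

        struct Position { double x, y; };

        std::atomic<bool> stopRequested{false};

        // Stubs standing in for the real motion/measurement code.
        void moveTo(const Position&) {}
        void sampleData() {}
        void collectData() {}
        void analyzeData() {}
        bool targetObtained() { return true; }   // stub: pretend the target is reached at once

        void worker(const std::vector<Position>& positions)
        {
            for (const Position& p : positions) {    // next position, same thread
                if (stopRequested) break;
                moveTo(p);
                while (!stopRequested && !targetObtained()) {
                    sampleData();
                    collectData();
                    analyzeData();
                }
            }
            // The thread ends only here: all positions done, or stop pressed.
        }

        int main()
        {
            std::vector<Position> positions{{0, 0}, {1, 0}, {2, 0}};
            std::thread t(worker, std::cref(positions));
            // ... UI runs; set stopRequested = true to cancel early ...
            t.join();
        }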

    Read the article

  • Running [R] on a Netbook

    - by Thomas
    I am interested in purchasing a netbook to do field research in another country. My hardware specifications for the netbook are fairly basic: Be rugged enough to survive a bit of wear and tear Fairly fast processing (the ability to upgrade from 1GB of RAM to 2GB) A battery life of longer than 6 hours At least a 10 inch screen A decent camera for Skyping However, I am mainly concerned about being able to do basic statistical analysis in conjunction with R: Be able to run a spreadsheet program to do basic data input (like Excel or Open Office) Use R to do basic data analysis (regression, some simulation (nothing crazy), data cleaning, and some of the functionality) Word processing (Word or Open Office) Do you have any suggestions on which models or brands may fit my needs? Some of the models I am considering: Samsung NB-30 Toshiba NB 305 Asus Eee PC 1005HA Lenovo S10-2 Does anyone use R on a netbook, and if so, do you have any recommendations on how best to optimize it? This article from Lifehacker mentions some OSes. Anybody use these in conjunction with R? Any help would be much appreciated.

    Read the article

  • Determining whether a class implements a generic list in a T4 template

    - by James Hollingworth
    I'm writing a T4 template which loads some classes from an assembly, does some analysis of the classes, and then generates some code. One particular bit of analysis I need to do is to determine whether a class implements a generic list. I can do this pretty simply in C#, e.g. public class Foo : List<string> { } var t = typeof(Foo); if (t.BaseType != null && t.BaseType.IsGenericType && t.BaseType.GetGenericTypeDefinition() == typeof(List<>)) Console.WriteLine("Win"); However, T4 templates use the FxCop introspection engine, so you do not have access to the .NET reflection API. I've spent the past couple of hours in Reflector but still can't figure it out. Does anyone have any clues about how to do this?

    Read the article

  • Using a "white list" for extracting terms for Text Mining, Part 2

    - by [email protected]
    In my last post, we set the groundwork for extracting specific tokens from a white list using a CTXRULE index. In this post, we will populate a table with the extracted tokens and produce a case table suitable for clustering with Oracle Data Mining. Our corpus of documents will be stored in a database table that is defined as create table documents(id NUMBER, text VARCHAR2(4000)); However, any suitable Oracle Text-accepted data type can be used for the text. We then create a table to contain the extracted tokens. The id column contains the unique identifier (or case id) of the document. The token column contains the extracted token. Note that a given document may have many tokens, so there will be one row per token for a given document. create table extracted_tokens (id NUMBER, token VARCHAR2(4000)); The next step is to iterate over the documents, extract the matching tokens using the index, and insert them into our token table. We use the MATCHES function for matching the query_string from my_thesaurus_rules with the text. DECLARE     cursor c2 is       select id, text       from documents; BEGIN     for r_c2 in c2 loop        insert into extracted_tokens          select r_c2.id id, main_term token          from my_thesaurus_rules          where matches(query_string,                        r_c2.text)>0;     end loop; END; Now that we have the tokens, we can compute the term frequency - inverse document frequency (TF-IDF) for each token of each document. create table extracted_tokens_tfidf as   with num_docs as (select count(distinct id) doc_cnt                     from extracted_tokens),        tf       as (select a.id, a.token,                            a.token_cnt/b.num_tokens token_freq                     from                        (select id, token, count(*) token_cnt                        from extracted_tokens                        group by id, token) a,                       (select id, count(*) num_tokens                        from extracted_tokens                        group by id) b                     where a.id=b.id),        doc_freq as (select token, count(*) overall_token_cnt                     from extracted_tokens                     group by token)   select tf.id, tf.token,          token_freq * ln(doc_cnt/df.overall_token_cnt) tf_idf   from num_docs,        tf,        doc_freq df   where df.token=tf.token; From the WITH clause, the num_docs query simply counts the number of documents in the corpus. The tf query computes the term (token) frequency by counting the number of times each token appears in a document and dividing that by the number of tokens found in the document. The doc_freq query counts the number of times the token appears overall in the corpus. In the SELECT clause, we compute the tf_idf. Next, we create the nested table required to produce one record per case, where a case corresponds to an individual document. Here, we COLLECT all the tokens for a given document into the nested column extracted_tokens_tfidf_1. CREATE TABLE extracted_tokens_tfidf_nt              NESTED TABLE extracted_tokens_tfidf_1                  STORE AS extracted_tokens_tfidf_tab AS              select id,                     cast(collect(DM_NESTED_NUMERICAL(token,tf_idf)) as DM_NESTED_NUMERICALS) extracted_tokens_tfidf_1              from extracted_tokens_tfidf              group by id;   To build the clustering model, we create a settings table and then insert the various settings.
    Most notable are the number of clusters (20), using cosine distance which is better for text, turning off auto data preparation since the values are ready for mining, the number of iterations (20) to get a better model, and the split criterion of size for clusters that are roughly balanced in number of cases assigned. CREATE TABLE km_settings (setting_name  VARCHAR2(30), setting_value VARCHAR2(30)); BEGIN   INSERT INTO km_settings (setting_name, setting_value)     VALUES (dbms_data_mining.clus_num_clusters, 20);   INSERT INTO km_settings (setting_name, setting_value)     VALUES (dbms_data_mining.kmns_distance, dbms_data_mining.kmns_cosine);   INSERT INTO km_settings (setting_name, setting_value)     VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_off);   INSERT INTO km_settings (setting_name, setting_value)     VALUES (dbms_data_mining.kmns_iterations, 20);   INSERT INTO km_settings (setting_name, setting_value)     VALUES (dbms_data_mining.kmns_split_criterion, dbms_data_mining.kmns_size);   COMMIT; END; With this in place, we can now build the clustering model. BEGIN     DBMS_DATA_MINING.CREATE_MODEL(     model_name          => 'TEXT_CLUSTERING_MODEL',     mining_function     => dbms_data_mining.clustering,     data_table_name     => 'extracted_tokens_tfidf_nt',     case_id_column_name => 'id',     settings_table_name => 'km_settings'); END; To generate cluster names from this model, check out my earlier post on that topic.

    Read the article

  • CodePlex Daily Summary for Tuesday, September 11, 2012

    CodePlex Daily Summary for Tuesday, September 11, 2012. Popular Releases: Metodología General Ajustada - MGA: 03.01.04: Parmenio's changes: sent updates to the PRO_F02 format, costs and sources. The script must be run on the database. Fixes a problem where, with several alternatives, form PR-02 would pull in the costs of the other alternative. John's changes: integrated the code with the changes sent by Parmenio Bonilla; generated installers; technical support by e-mail and telephone; restored and recovered two projects from Chocó database backups. Arduino for Visual Studio: Arduino 1.x for Visual Studio 2012, 2010 and 2008: Register for the visualmicro.com forum for more news and updates. Version 1209.10 includes support for VS2012 and minor fixes for the Arduino debugger beta test team. Version 1208.19 is considered stable for Visual Studio 2010 and 2008. If you are upgrading from an older release of Visual Micro and encounter a problem then uninstall "Visual Micro for Arduino" using "Control Panel>Add and Remove Programs" and then run the install again. Key Features of 1209.10: Support for Visual Studio 2... Microsoft Script Explorer for Windows PowerShell: Script Explorer Reference Implementation(s): This download contains Source Code and Documentation for the Script Explorer DB Reference Implementation. You can create your own provider and use it in Script Explorer. Refer to the documentation for more information. The source code is provided "as is" without any warranty. Read the Readme.txt file in the SourceCode. Social Network Importer for NodeXL: SocialNetImporter(v.1.5): This new version includes: - Fixed the "resource limit" bug caused by Facebook - Bug fixes. To use the new graph data provider, do the following: Unzip the Zip file into the "PlugIns" folder that can be found in the NodeXL installation folder (i.e. "C:\Program Files\Social Media Research Foundation\NodeXL Excel Template\PlugIns"), open the NodeXL template, and you can access the new importer from the "Import" menu. AcDown Downloader Framework: AcDown Downloader Framework v4.1: A C# download manager supporting Acfun, Bilibili, YouTube and several other video sites, with companion AcPlay playback support; requires .NET Framework 2.0 or later on 32/64-bit Windows XP/Vista/7/8, or Mono on 32/64-bit Linux. Move Mouse: Move Mouse 2.5.2: FIXED - Minor fixes and improvements. MVC Controls Toolkit: Mvc Controls Toolkit 2.3: Added: The new release is compatible with Mvc4 RTM. Support for handling Time Zones in dates. Specifically added helper methods to convert to UTC or local time all DateTimes contained in a model received by a controller, and helper methods to handle date-only fields. This, together with detailed documentation on how TimeZones are handled in all situations by the Asp.net Mvc framework, will contribute to mitigate the nightmare of dates and timezones. Multiple Templates, and more options to... DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.02.00: Maintenance Release. Changes on Metro7 06.02.00: Fixed width and height on the jQuery popup for the Editor.
Navigation Provider changed to DDR menu Added menu files and scripts Changed skins to Doctype HTML Changed manifest to dnn6 manifest file Changed License to HTML view Fixed issue on Metro7/PinkTitle.ascx with double registering of the Actions Changed source folder structure and start folder, so the project works with the default DNN structure on developing Added VS 20...Mishra Reader: Mishra Reader Beta 4: Additional bug fixes and logging in this release to try to find the reason some users aren't able to see the main window pop up. Also, a few UI tweaks to tighten up the feed item list. This release requires the final version of .NET 4.5. If the ClickOnce installer doesn't work for you, please try the additional setup exe.Xenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.9.0: Release Notes Imporved framework architecture Improved the framework security More import/export formats and operations New WebPortal application which includes forum, new, blog, catalog, etc. UIs Improved WebAdmin app. Reports, navigation and search Perfomance optimization Improve Xenta.Catalog domain More plugin interfaces and plugin implementations Refactoring Windows Azure support and much more... Package Guide Source Code - package contains the source code Binaries...Json.NET: Json.NET 4.5 Release 9: New feature - Added JsonValueConverter New feature - Set a property's DefaultValueHandling to Ignore when EmitDefaultValue from DataMemberAttribute is false Fix - Fixed DefaultValueHandling.Ignore not igoring default values of non-nullable properties Fix - Fixed DefaultValueHandling.Populate error with non-nullable properties Fix - Fixed error when writing JSON for a JProperty with no value Fix - Fixed error when calling ToList on empty JObjects and JArrays Fix - Fixed losing deci...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.66: Just going to bite the bullet and rip off the band-aid... SEMI-BREAKING CHANGE! Well, it's a BREAKING change to those who already adjusted their projects to use the previous breaking change's ill-conceived renamed DLLs (versions 4.61-4.65). For those who had not adapted and were still stuck in this-doesn't-work-please-fix-me mode, this is more like a fixing change. The previous breaking change just broke too many people, I'm sorry to say. Renaming the DLL from AjaxMin.dll to AjaxMinLibrary.dl...DotNetNuke® Community Edition CMS: 07.00.00 CTP (Not for Production Use): NOTE: New Minimum Requirementshttp://www.dotnetnuke.com/Portals/25/Blog/Files/1/3418/Windows-Live-Writer-1426fd8a58ef_902C-MinimumVersionSupport_2.png Simplified InstallerThe first thing you will notice is that the installer has been updated. Not only have we updated the look and feel, but we also simplified the overall install process. You shouldn’t have to click through a series of screens in order to just get your website running. With the 7.0 installer we have taken an approach that a...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.2: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. 
You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...BIDS Helper: BIDS Helper 1.6.1: In addition to fixing a number of bugs that beta testers reported, this release includes the following new features for Tabular models in SQL 2012: New Features: Tabular Display Folders Tabular Translations Editor Tabular Sync Descriptions Fixed Issues: Biml issues 32849 fixing bug in Tabular Actions Editor Form where you type in an invalid action name which is a reserved word like CON or which is a duplicate name to another action 32695 - fixing bug in SSAS Sync Descriptions whe...Umbraco CMS: Umbraco 4.9.0: Whats newThe media section has been overhauled to support HTML5 uploads, just drag and drop files in, even multiple files are supported on any HTML5 capable browser. The folder content overview is also much improved allowing you to filter it and perform common actions on your media items. The Rich Text Editor’s “Media” button now uses an embedder based on the open oEmbed standard (if you’re upgrading, enable the media button in the Rich Text Editor datatype settings and set TidyEditorConten...WordMat: WordMat v. 1.02: This version was used for the 2012 exam.Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales Documents ManufacturingInstructions ProductCatalog TerritorySalesDrilldown WorkOrderRouting How to install the sample You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...Desktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...New Projects&*^&hjkdhasdas;!231231+_)_(asldjhdasjkdhasjkd!#$: aaaaaasdsadsbob.js: bob.js is a JavaScript framework which will help you design better JavaScript objects, and to write better JavaScript code in general.CS559: CS559 - Graphics project at UW MadisonEjemplodeProyectosII: El sisguiente es un ejemplo de pruebasHow to apply MVC to mobile project: I am setting up simple ASP NET MVC test project in order to demonstrate how to create cross-platform webdelivery backend with ASP.NET IntelsiWEB: Proyecto Intelsi WEBjass: xcxczzzzccxzcxcxzcxzcxMajo Proyecto de Prueba: Este proyecto es un proyectoManaged silverlight roaches: Just roaches for silverlight. nothing moreMetro Jalali Calendar: The Jalali Calendar for Windows 8 Metro User Interface. 
"Jalali Calendar" aka "Shamsi Calendar" aka "Iranian Calendar"MyBackupTool: Basic c# .net 4.0 library to help copy files / folders, very simple to use.NCT Inventory System: This is a sample Inventory System ProjectNDEF Library for Proximity APIs (NFC): Easily parse and create NDEF records with C# and the Windows Proximity APIs (NFC).OAMaster: Help company to manage their information of staff.Open School Management System: Open School Management System is an open source initiative that I am undertaking to contribute to the society that we live in.Oscilloscope Frequency Calculator: Oscilloscope Frequency Calculator Gives you the frequency when you have a analogue oscilloscopePCControl: testPersonal Expenses: A simple web application to track personal expenses.PMOM: project management tool on SharePointpostback tutorial: This is a tutorial project.ProjectJabbr910: pappaProyecto: Primer proyectoProyecto Cachanga: Demo proyectos 2Proyecto de Prueba: Demo proyecto 2Proyecto Integrador 2: CURSO : PROYECTO INTEGRADOR 2 DEMOProyecto Integrador Para Soluciones Empresariales: Demo Proyecto 2Proyecto Integrador Soluciones Empresariales II - 2012-2: Demo proyectos 2Proyecto Prueba: practica codeplexRaquelAzureProject: Test projectSILAS: Sistema de Información Local de Agua Y Saneamiento: Sistema de Información Local para el Área de Medio Ambiente y Recursos Naturales de la Municipalidad Provincial de CajamarcaSilasPro: Proyecto de proyecto integradorSistem LPK Pemkot Semarang versi x86: Sistem Laporan Pelaksanaan Kegiatan SKPD Kota Semarang khusus untuk komputer Windows 64 bitSSIS Compressed File Source and Destination Components: Compressed File Source and Destination Components for SSIS 2012.testscenairo7: kudutestTorrentTrafficInspector: IT Solution Georgia TowerDef: Ð? tài opensource c?a group môn CNPMUdpBBS: BBS that uses UDP messages for communication to allow small, simple clients to use IP connections to interact with the BBS.WPFYard: my wpf yardWPUtils: WPUtils provides out-of-box attached properties/behaviors to extent existing controls/components.XiSD: A simple XSD to C# code generation tool with support for included XSD files and data contract attributes.xxxxf32: xxx

    Read the article

  • Game loop and time tracking

    - by David Brown
    Maybe I'm just an idiot, but I've been trying to implement a game loop all day and it's just not clicking. I've read literally every article I could find on Google, but the problem is that they all use different timing mechanisms, which makes them difficult to apply to my particular situation (some use milliseconds, other use ticks, etc). Basically, I have a Clock object that updates each time the game loop executes. internal class Clock { public static long Timestamp { get { return Stopwatch.GetTimestamp(); } } public static long Frequency { get { return Stopwatch.Frequency; } } private long _startTime; private long _lastTime; private TimeSpan _totalTime; private TimeSpan _elapsedTime; /// <summary> /// The amount of time that has passed since the first step. /// </summary> public TimeSpan TotalTime { get { return _totalTime; } } /// <summary> /// The amount of time that has passed since the last step. /// </summary> public TimeSpan ElapsedTime { get { return _elapsedTime; } } public Clock() { Reset(); } public void Reset() { _startTime = Timestamp; _lastTime = 0; _totalTime = TimeSpan.Zero; _elapsedTime = TimeSpan.Zero; } public void Tick() { long currentTime = Timestamp; if (_lastTime == 0) _lastTime = currentTime; _totalTime = TimestampToTimeSpan(currentTime - _startTime); _elapsedTime = TimestampToTimeSpan(currentTime - _lastTime); _lastTime = currentTime; } public static TimeSpan TimestampToTimeSpan(long timestamp) { return TimeSpan.FromTicks( (timestamp * TimeSpan.TicksPerSecond) / Frequency); } } I based most of that on the XNA GameClock, but it's greatly simplified. Then, I have a Time class which holds various times that the Update and Draw methods need to know. public class Time { public TimeSpan ElapsedVirtualTime { get; internal set; } public TimeSpan ElapsedRealTime { get; internal set; } public TimeSpan TotalVirtualTime { get; internal set; } public TimeSpan TotalRealTime { get; internal set; } internal Time() { } internal Time(TimeSpan elapsedVirtualTime, TimeSpan elapsedRealTime, TimeSpan totalVirutalTime, TimeSpan totalRealTime) { ElapsedVirtualTime = elapsedVirtualTime; ElapsedRealTime = elapsedRealTime; TotalVirtualTime = totalVirutalTime; TotalRealTime = totalRealTime; } } My main class keeps a single instance of Time, which it should constantly update during the game loop. So far, I have this: private static void Loop() { do { Clock.Tick(); Time.TotalRealTime = Clock.TotalTime; Time.ElapsedRealTime = Clock.ElapsedTime; InternalUpdate(Time); InternalDraw(Time); } while (!_exitRequested); } The real time properties of the time class turn out great. Now I'd like to get a proper update/draw loop working so that the state is updated a variable number of times per frame, but at a fixed timestep. At the same time, the Time.TotalVirtualTime and Time.ElapsedVirtualTime should be updated accordingly. In addition, I intend for this to support multiplayer in the future, in case that makes any difference to the design of the game loop. Any tips or examples on how I could go about implementing this (aside from links to articles)?
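
    A sketch of the classic fixed-timestep accumulator (in the spirit of "Fix Your Timestep"), wired to the Clock and Time classes above: bank the real time that has passed, then spend it in fixed slices, so Update runs zero or more times per frame with a constant dt while the virtual-time fields advance in fixed increments (which is also what you want for multiplayer determinism). This is one common shape, not the only one.

        private static readonly TimeSpan FixedStep = TimeSpan.FromSeconds(1.0 / 60.0);

        private static void Loop()
        {
            TimeSpan accumulator = TimeSpan.Zero;
            do
            {
                Clock.Tick();
                Time.TotalRealTime = Clock.TotalTime;
                Time.ElapsedRealTime = Clock.ElapsedTime;

                // Bank elapsed real time, then spend it in fixed slices.
                // (Consider clamping Clock.ElapsedTime here so a long stall
                // can't trigger a spiral of catch-up updates.)
                accumulator += Clock.ElapsedTime;
                while (accumulator >= FixedStep)
                {
                    Time.ElapsedVirtualTime = FixedStep;
                    Time.TotalVirtualTime += FixedStep;
                    InternalUpdate(Time);      // 0..n updates per frame, fixed dt
                    accumulator -= FixedStep;
                }

                InternalDraw(Time);            // draw once per frame
            } while (!_exitRequested);
        }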

    Read the article

  • Checkbox values to varchar via Spring

    - by iowatiger08
    I am trying to get a varchar message from a database to display the selected values of a checkbox field in a JSP for a patient's medication dosage frequency. The possible values will be saved as a comma-delimited string in the varchar. For most form fields there is simply a one form value to one database field ratio, but in this case I need to merge the values that arrive as a String[] into the comma-delimited string, and then, when retrieving that record for that medication of that patient, display the selected values from the comma-delimited string as selected from the selectableDosageFrequencyList. Your assistance in this is greatly appreciated, as I am not sure what I am missing here. In the application context, I created the list of possible values as part of the ServiceBean. <property name="selectableDosageFrequencyList"> <set> <value>On an empty stomach</value> <value>Every other day</value> <value>4 times daily</value> <value>3 times daily</value> <value>Twice daily</value> <value>At bedtime</value> <value>With meal</value> <value>As needed</value> <value>Once daily</value> </set> </property> This is set up in the flow as requestScope. <view-state id="addEditMedication" model="medication"> <on-render> <set name="requestScope.selectableDosageFrequencyList" value="memberService.buildSelectableDosageFrequencyList(patient)" /> </on-render> ... <transition on="next" to="assessment" > <evaluate expression="memberService.updateMedication(patient, medication)" /> </transition> </view-state> I have helper methods in the memberService that need to be executed when the form is initialized and then when the form is completed. //take the selected form fields and build the comma-delimited string for the db public String setSelectedDosageFrequency(String[] dosageFrequencies){ if (dosageFrequencies == null || dosageFrequencies.length == 0) { return null; } StringBuilder frequencies = new StringBuilder(); for (String s : dosageFrequencies){ if (frequencies.length() > 0) { frequencies.append(","); } frequencies.append(s); } return frequencies.toString(); } //take the value from the database and build the selected Set public LinkedHashSet<String> getSelectedDosageFrequencyList(String dosageFrequency){ LinkedHashSet<String> setOfSelectedDosageFrequency = new LinkedHashSet<String>(); if (dosageFrequency != null) { // split the comma-delimited string and keep only known values for (String token : dosageFrequency.split(",")) { String candidate = token.trim(); if (selectableDosageFrequencyList.contains(candidate)) { setOfSelectedDosageFrequency.add(candidate); } } } return setOfSelectedDosageFrequency; } The Medication class that backs the form will have a variable for dosage frequency as a String. private String dosageFrequency; In the JSP I currently am doing this. <div class="formField"> <form:label path="dosageFrequency">Dosage Frequency</form:label> <ul class="multi-column double" style="width: 550px;"> <form:checkboxes path="dosageFrequency" items="${selectableDosageFrequencyList}" itemLabel="${selectableDosageFrequencyList}" element="li" /> </ul> </div>

    Read the article

  • How to copy this portion of a text file out and put into a hash using rails? (VATsim datafile)

    - by Rusty Broderick
    Hi I'm trying to work out how i can cut out the section between !CLIENTS and the '; ;' and then to parse it into a hash in order to make an xml file. Honestly have no idea how to do it. The file is as follows: vatsim-data.txt original file here ; Created at 30/12/2010 01:29:14 UTC by Data Server V4.0 ; ; Data is the property of VATSIM.net and is not to be used for commercial purposes without the express written permission of the VATSIM.net Founders or their designated agent(s ). ; ; Sections are: ; !GENERAL contains general settings ; !CLIENTS contains informations about all connected clients ; !PREFILE contains informations about all prefiled flight plans ; !SERVERS contains a list of all FSD running servers to which clients can connect ; !VOICE SERVERS contains a list of all running voice servers that clients can use ; ; Data formats of various sections are: ; !GENERAL section - VERSION is this data format version ; RELOAD is time in minutes this file will be updated ; UPDATE is the last date and time this file has been updated. Format is yyyymmddhhnnss ; ATIS ALLOW MIN is time in minutes to wait before allowing manual Atis refresh by way of web page interface ; CONNECTED CLIENTS is the number of clients currently connected ; !CLIENTS section - callsign:cid:realname:clienttype:frequency:latitude:longitude:altitude:groundspeed:planned_aircraft:planned_tascruise:planned_depairport:planned_altitude:planned_destairport:server:protrevision:rating:transponder:facilitytype:visualrange:planned_revision:planned_flighttype:planned_deptime:planned_actdeptime:planned_hrsenroute:planned_minenroute:planned_hrsfuel:planned_minfuel:planned_altairport:planned_remarks:planned_route:planned_depairport_lat:planned_depairport_lon:planned_destairport_lat:planned_destairport_lon:atis_message:time_last_atis_received:time_logon:heading:QNH_iHg:QNH_Mb: ; !PREFILE section - callsign:cid:realname:clienttype:frequency:latitude:longitude:altitude:groundspeed:planned_aircraft:planned_tascruise:planned_depairport:planned_altitude:planned_destairport:server:protrevision:rating:transponder:facilitytype:visualrange:planned_revision:planned_flighttype:planned_deptime:planned_actdeptime:planned_hrsenroute:planned_minenroute:planned_hrsfuel:planned_minfuel:planned_altairport:planned_remarks:planned_route:planned_depairport_lat:planned_depairport_lon:planned_destairport_lat:planned_destairport_lon:atis_message:time_last_atis_received:time_logon:heading:QNH_iHg:QNH_Mb: ; !SERVERS section - ident:hostname_or_IP:location:name:clients_connection_allowed: ; !VOICE SERVERS section - hostname_or_IP:location:name:clients_connection_allowed:type_of_voice_server: ; ; Field separator is : character ; ; !GENERAL: VERSION = 8 RELOAD = 2 UPDATE = 20101230012914 ATIS ALLOW MIN = 5 CONNECTED CLIENTS = 515 ; ; !VOICE SERVERS: voice2.vacc-sag.org:Nurnberg:Europe-CW:1:R: voice.vatsim.fi:Finland - Sponsored by Verkkokauppa.com and NBL Solutions:Finland:1:R: rw.liveatc.net:USA, California:Liveatc:1:R: rw1.vatpac.org:Melbourne, Australia:Oceania:1:R: spain.vatsim.net:Spain:Vatsim Spain Server:1:R: voice.nyartcc.org:Sponsored by NY ARTCC:NY-ARTCC:1:R: voice.zhuartcc.net:Sponsored by Houston ARTCC:ZHU-ARTCC:1:R: ; ; !CLIENTS: 01PD:1090811:prentis gibbs KJFK:PILOT::40.64841:-73.81030:15:0::0::::USA-E:100:1:1200::::::::::::::::::::20101230010851:28:30.1:1019: 4X-BRH:1074589:george sandoval LLJR:PILOT::50.05618:-125.84429:10819:206:C337/G:150:CYAL:FL120:CCI9:EUROPE-C2:100:1:6043:::2:I:110:110:1:26:2:59:: 
    /T/:DCT:0:0:0:0:::20101230005323:129:29.76:1007: 50125:1109107:Dave Frew KEDU:PILOT::46.52736:-121.95317:23877:471:B/B744/F:530:KTCM:30000:KLSV:USA-E:100:1:7723:::1:I:0:116:0:0:0:0:::GPS DIRECT.:0:0:0:0:::20101230012346:164:29.769:1008: 85013:1126003:Dmitry Abramov UWWW:PILOT::76.53819:71.54782:33444:423:T/ZZZZ/G:500:UUDD:FL330:ULAA:EUROPE-C2:100:1:2200:::2:I:0:2139:0:0:0:0:ULLI::BITSA DCT WM/N0485S1010 DCT KS DCT NE R22 ULWW B153 LAPEK B210 SU G476 OLATA:0:0:0:0:::20101229215815:62:53.264:1803: ; ; !SERVERS: EUROPE-C2:88.198.19.202:Europe:Center Europe Server Two:1: ; ; END I want to format the XML with a client tag as the parent and the nested tags as follows: callsign:cid:realname:clienttype:frequency:latitude:longitude:altitude:groundspeed:planned_aircraft:planned_tascruise:planned_depairport:planned_altitude:planned_destairport:server:protrevision:rating:transponder:facilitytype:visualrange:planned_revision:planned_flighttype:planned_deptime:planned_actdeptime:planned_hrsenroute:planned_minenroute:planned_hrsfuel:planned_minfuel:planned_altairport:planned_remarks:planned_route:planned_depairport_lat:planned_depairport_lon:planned_destairport_lat:planned_destairport_lon:atis_message:time_last_atis_received:time_logon:heading:QNH_iHg:QNH_Mb: Any help in solving this would be much appreciated!
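
    A minimal Ruby sketch (plain Ruby, usable from a Rails model or rake task), assuming the layout shown above: a "!CLIENTS:" line, one colon-separated record per line, terminated by a line holding just ";". Records wrapped across lines (like the 4X-BRH row above) would need rejoining first, and the field list should be extended to the full set from the header comment.

        # First fields from the !CLIENTS format line; extend per the header.
        CLIENT_FIELDS = %w[callsign cid realname clienttype frequency latitude
                           longitude altitude groundspeed planned_aircraft
                           planned_tascruise planned_depairport planned_altitude
                           planned_destairport server]

        def extract_clients(raw)
          # Grab everything between "!CLIENTS:" and the next lone ";" line.
          section = raw[/^!CLIENTS:\s*\n(.*?)\n;\s*$/m, 1]
          return [] unless section
          section.each_line.map do |line|
            values = line.strip.split(":")
            Hash[CLIENT_FIELDS.zip(values.first(CLIENT_FIELDS.size))]
          end
        end

        # Usage: extract_clients(File.read("vatsim-data.txt")).first["callsign"]  # => "01PD"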

    Read the article

  • How to obtain a random sub-datatable from another data table

    - by developerit
    Introduction In this article, I'll show how to get a random subset of data from a DataTable. This is useful when you already have queries that are filtered correctly but return all the rows. Analysis I came across this situation when I wanted to display a random tag cloud. I already had the query to get the keywords ordered by number of clicks, and I wanted to create a tag cloud. Tags that are the most popular should have more chance of getting picked and should be displayed larger than less popular ones. Implementation In this code snippet, there is everything you need. ' Min size, in pixels, for the tag Private Const MIN_FONT_SIZE As Integer = 9 ' Max size, in pixels, for the tag Private Const MAX_FONT_SIZE As Integer = 14 ' Basic function that retrieves Tags from a DataBase Public Shared Function GetTags() As MediasTagsDataTable ' Simple call to the TableAdapter, to get the Tags ordered by number of clicks Dim dt As MediasTagsDataTable = taMediasTags.GetDataValide ' If the query returned no result, return an empty DataTable If dt Is Nothing OrElse dt.Rows.Count < 1 Then Return New MediasTagsDataTable End If ' Set the font-size of the group of data ' We are dividing our results into subsets, according to their number of clicks ' Example: 10 results -> [0,2] will get font size 9, [3,5] will get font size 10, [6,8] will get 11, ... ' This is the number of elements in one group Dim groupLength As Integer = CType(Math.Floor(dt.Rows.Count / (MAX_FONT_SIZE - MIN_FONT_SIZE)), Integer) ' Counter of elements in the same group Dim counter As Integer = 0 ' Counter of groups Dim groupCounter As Integer = 0 ' Loop through the list For Each row As MediasTagsRow In dt ' Set the font-size in a custom column row.c_FontSize = MIN_FONT_SIZE + groupCounter ' Increment the counter counter += 1 ' If the group is full, start a new one If groupLength <= counter Then counter = 0 groupCounter += 1 End If Next ' Return the new DataTable with font-size Return dt End Function ' Function that generates the random subset Public Shared Function GetRandomSampleTags(ByVal KeyCount As Integer) As MediasTagsDataTable ' Get the data Dim dt As MediasTagsDataTable = GetTags() ' Create a new DataTable that will contain the random set Dim rep As MediasTagsDataTable = New MediasTagsDataTable ' Count the number of rows in the new DataTable Dim count As Integer = 0 ' Random number generator Dim rand As New Random() While count < KeyCount AndAlso dt.Rows.Count > 0 ' Pick a random row (Random.Next's upper bound is exclusive) Dim r As Integer = rand.Next(0, dt.Rows.Count) Dim tmpRow As MediasTagsRow = dt(r) ' Import it into the new DataTable rep.ImportRow(tmpRow) ' Remove it from the old one, to be sure not to pick it again dt.Rows.RemoveAt(r) ' Increment the counter count += 1 End While ' Return the new subset Return rep End Function Pros This method is good because it doesn't require much work to get working fast. It is a good approach when you are working with small tables, say fewer than 100 records. Cons If you have more than 100 records, an out-of-memory exception may occur, since we are copying and duplicating rows. I would consider using a stored procedure instead.

    Read the article

  • SQL Authority News – Download Microsoft SQL Server 2014 Feature Pack and Microsoft SQL Server Developer’s Edition

    - by Pinal Dave
    Yesterday I attended the SQL Server Community Launch in Bangalore and presented on giving an effective presentation. It was a fun session and it was very well received. No matter what subject I present on, I always end up talking about SQL. Here are two of the questions I received during the event. Q1) I want to install SQL Server on my development server; where can I get it for free or at an economical price (I do not have MSDN)? A1) If you are not going to use your server in a production environment, you can just get SQL Server Developer's Edition, and you can read more about it over here. Here is another favorite question which I keep on receiving at events. Q2) I already have SQL Server installed on my machine; which feature packs should I install and where can I get them? A2) Just download and install the Microsoft SQL Server 2014 Feature Pack. Here is the link for downloading it. The Microsoft SQL Server 2014 Feature Pack is a collection of stand-alone packages which provide additional value for Microsoft SQL Server. It includes tools and components for Microsoft SQL Server 2014 and add-on providers for Microsoft SQL Server 2014. Here is the list of components this product contains:

    Microsoft SQL Server Backup to Windows Azure Tool
    Microsoft SQL Server Cloud Adapter
    Microsoft Kerberos Configuration Manager for Microsoft SQL Server
    Microsoft SQL Server 2014 Semantic Language Statistics
    Microsoft SQL Server Data-Tier Application Framework
    Microsoft SQL Server 2014 Transact-SQL Language Service
    Microsoft Windows PowerShell Extensions for Microsoft SQL Server 2014
    Microsoft SQL Server 2014 Shared Management Objects
    Microsoft Command Line Utilities 11 for Microsoft SQL Server
    Microsoft ODBC Driver 11 for Microsoft SQL Server – Windows
    Microsoft JDBC Driver 4.0 for Microsoft SQL Server
    Microsoft Drivers 3.0 for PHP for Microsoft SQL Server
    Microsoft SQL Server 2014 Transact-SQL ScriptDom
    Microsoft SQL Server 2014 Transact-SQL Compiler Service
    Microsoft System CLR Types for Microsoft SQL Server 2014
    Microsoft SQL Server 2014 Remote Blob Store
    SQL RBS codeplex samples page
    SQL Server Remote Blob Store blogs
    Microsoft SQL Server Service Broker External Activator for Microsoft SQL Server 2014
    Microsoft OData Source for Microsoft SQL Server 2014
    Microsoft Balanced Data Distributor for Microsoft SQL Server 2014
    Microsoft Change Data Capture Designer and Service for Oracle by Attunity for Microsoft SQL Server 2014
    Microsoft SQL Server 2014 Master Data Service Add-in for Microsoft Excel
    Microsoft SQL Server StreamInsight
    Microsoft Connector for SAP BW for Microsoft SQL Server 2014
    Microsoft SQL Server Migration Assistant
    Microsoft SQL Server 2014 Upgrade Advisor
    Microsoft OLEDB Provider for DB2 v5.0 for Microsoft SQL Server 2014
    Microsoft SQL Server 2014 PowerPivot for Microsoft SharePoint 2013
    Microsoft SQL Server 2014 ADOMD.NET
    Microsoft Analysis Services OLE DB Provider for Microsoft SQL Server 2014
    Microsoft SQL Server 2014 Analysis Management Objects
    Microsoft SQL Server Report Builder for Microsoft SQL Server 2014
    Microsoft SQL Server 2014 Reporting Services Add-in for Microsoft SharePoint

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo. Talks summarized below:
    - Introduction to Toorcon 15 (2013)
    - A Tale of One Software Bypass of MS Windows 8 Secure Boot
    - Breaching SSL, One Byte at a Time
    - Running at 99%: Surviving an Application DoS
    - Security Response in the Age of Mass Customized Attacks
    - x86 Rewriting: Defeating RoP and other Shinanighans
    - Clowntown Express: interesting bugs and running a bug bounty program
    - Active Fingerprinting of Encrypted VPNs
    - Making Attacks Go Backwards
    - Mask Your Checksums—The Gorry Details
    - Adventures with weird machines thirty years after "Reflections on Trusting Trust"
    Introduction to Toorcon 15 (2013)
    Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here, starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about. Be aware that I may have misrepresented the speakers' remarks, and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.
    A Tale of One Software Bypass of MS Windows 8 Secure Boot
    Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak
    Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.
    MS Windows 8 Secure Boot Overview
    UEFI (Unified Extensible Firmware Interface) is the interface between hardware and the OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi) and, once replaced, can modify the kernel; replacing the bootloader is trivial. Today there are many legacy bootkits, and UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes:
    - UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as used for phones), but you can't do that for PCs; PCs use writable memory with signatures.
    - The DXE core verifies the UEFI boot loader(s).
    - The OS loader (winload.efi, winresume.efi) verifies the OS kernel.
    A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx), which hold X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys; they can't be accessed once the OS starts, which uses Run-Time (RT) services instead. The root cert uses RSA-2048 public keys and PKCS#7 format signatures. The relevant variables are:
    - SecureBoot: enable/disable image signature checks
    - SetupMode: update keys, self-signed keys, and secure boot variables
    - CustomMode: allows updating keys
    Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, and query user on security violation.
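    As a rough mental model of the db/dbx decision described above, here is a toy Python sketch (an illustration only; real firmware verifies X.509 signature chains rooted in the PK/KEKs, and none of these names are real UEFI APIs):

        import hashlib

        # Toy model: db holds hashes of authorized images, dbx holds revoked ones.
        # This sketch only models the hash case, not the signature case.
        db = {hashlib.sha256(b"bootmgfw.efi v2").hexdigest()}
        dbx = {hashlib.sha256(b"bootmgfw.efi v1 (vulnerable)").hexdigest()}

        def may_execute(image: bytes) -> bool:
            h = hashlib.sha256(image).hexdigest()
            return h in db and h not in dbx

        print(may_execute(b"bootmgfw.efi v2"))               # True: authorized
        print(may_execute(b"bootmgfw.efi v1 (vulnerable)"))  # False: revoked
        print(may_execute(b"evil bootkit"))                  # False: unknown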
    Attacking MS Windows 8 Secure Boot
    Secure Boot does NOT protect from physical access: it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors, so it becomes a "zoo" of implementations, which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly, which means vendors must:
    - allow only signed updates to UEFI firmware
    - protect UEFI firmware from direct modification in flash memory
    - protect FW update components
    - program the SPI controller securely
    - protect secure boot policy settings in NVRAM
    - protect the runtime API
    - disable the compatibility support module, which allows unsigned legacy code
    An attacker can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the firmware enters setup mode with secure boot turned off. One can also exploit the TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, but they found a bug that they can exploit from User Mode (undisclosed), and they demoed the exploit: it loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable, and they will disclose this exploit to vendors in the future. Recommendations: allow only signed updates, protect UEFI firmware in ROM, and protect the EFI variable store in ROM.
    Breaching SSL, One Byte at a Time
    Angelo Prado and Yoel Gluck, Salesforce.com
    CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests with every possible character and measures the ciphertext length, looking for the plaintext which compresses the most, and recovers the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy, and Huffman coding replaces common byte sequences with shorter codes. US CERT thought the SSL compression problem was fixed, but it wasn't; the speakers convinced CERT that it wasn't fixed, and CERT issued a CVE.
    BREACH, breachattrack.com
    BREACH exploits compression of the SSL response body (Accept-Encoding, Content-Encoding), which is not affected by disabling SSL-level compression. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say, from a web form, or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. It can use compression to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so it can find each character by guessing every character. To start the guessing, BREACH needs at least three known initial characters in the response sequence. The compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include:
    - lookahead (guess 2 or 3 characters at a time instead of 1; expensive)
    - rollback to the last known conflict
    - checking the compression ratio
    - brute-forcing the first 3 "bootstrap" characters, if needed (expensive)
    - for block ciphers, which hide the exact plaintext length, aligning the response to the block size in advance
    Mitigations:
    - length: use variable padding
    - secrets: dynamic CSRF tokens per request
    - secrets: change them over time
    - use separate secrets for input-less servlets
    Future work: a deeper understanding of DEFLATE/GZIP, and HTTPS extensions.
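    The length side channel is easy to reproduce locally. Here is a toy Python sketch of the compression-oracle idea (a made-up secret and page, not the BREACH tool itself): the guess sharing the longest prefix with the secret compresses to the shortest output.

        import zlib

        SECRET = "csrftoken=q1w2e3"  # hypothetical secret reflected in the page

        def response_length(attacker_input):
            # Attacker-controlled input and the secret share one compressed
            # body, so matching bytes become LZ77 back-references and shrink it.
            body = "<html>" + attacker_input + " ... " + SECRET + "</html>"
            return len(zlib.compress(body.encode()))

        known = "csrftoken=q1w2e"
        candidates = "abcdefghijklmnopqrstuvwxyz0123456789"
        # In a real attack, ties and Huffman packing blur single trials;
        # BREACH works around that with padding and repeated measurements.
        best = min(candidates, key=lambda c: response_length(known + c))
        print("next byte is probably:", best)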
    Running at 99%: Surviving an Application DoS
    Ryan Huber, Risk I/O
    Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or to download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx. How to identify malicious hosts:
    - short, sudden web requests
    - the user-agent is obvious (curl, python)
    - the same URL requested repeatedly
    - no web page referer (not normal)
    - hidden links: hide a link and see if a bot follows it
    - restricted access if not your geo IP (unless the website is global)
    - missing common headers in the request
    - regular timing
    - IP first seen at the beginning of the attack
    - request counts per host (usually a very large number)
    Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.
    Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer
    Bouncer is software written by Ryan that works from netflow data. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it: no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers so bouncer gets clean timing data. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip of collected data, documentation, a consumer library, multitasking, and logging of destroyed connections. Takeaways:
    - DoS mitigation is easier with a complete picture
    - Bouncer is designed to make it easier to detect and defend against DoS; it is not a complete cure
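    A sliding-window request counter is the simplest of the heuristics above. Here is a minimal Python sketch (the threshold is made up, and Bouncer itself works from netflow data rather than an in-process counter):

        import time
        from collections import defaultdict, deque

        WINDOW_SECONDS = 10
        MAX_REQUESTS = 100  # hypothetical per-host threshold

        _hits = defaultdict(deque)

        def is_suspicious(ip, now=None):
            # Count requests per host in a sliding window; a very large
            # count in a short window is one of the DoS tells listed above.
            now = time.time() if now is None else now
            q = _hits[ip]
            q.append(now)
            while q and q[0] < now - WINDOW_SECONDS:
                q.popleft()
            return len(q) > MAX_REQUESTS

        # Example: a burst of 200 requests in ~4 seconds trips the check
        print(any(is_suspicious("203.0.113.9", now=t / 50.0) for t in range(200)))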
    Security Response in the Age of Mass Customized Attacks
    Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/
    Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School: differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world; or consider designing your own PC from commodity parts. Exploit kits are another example of mass customization: the kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of mass malware: potent, resilient, and relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, and obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware, they infer edicts to:
    - support all versions of Reader
    - support all versions of Windows
    - support all versions of Flash
    - support all browsers
    - write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)
    Exploits have "loose coupling" with multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. However, these complete exploits are more likely to be buggy or fragile in themselves, and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.
    x86 Rewriting: Defeating RoP and other Shinanighans
    Richard Wartell
    The attack vector addressed here is: first, some malware causes a buffer overflow. The malware has no program access, but it has input access, and it overflows code onto the stack. Later the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-oriented Programming: RoP attacks use your own code, writing return addresses on the stack that point to (existing) exploitable code fragments found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes the base of the address space, not the individual "gadgets" within it. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created STIR (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory: STIR writes new code sections with copies of the basic blocks in randomized locations, the old code is rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies, for example: don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.
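    To make the relocation idea concrete, here is a toy Python sketch (an illustration of the concept only, not the STIR tool): basic blocks are copied to randomized addresses and the old locations become trampolines, so hard-coded gadget addresses in a RoP payload no longer point at usable code.

        import random

        # Original layout: address -> basic block. A RoP payload would
        # hard-code these addresses.
        code = {0x1000: "pop rdi; ret", 0x1010: "mov rax, rbx; ret", 0x1020: "syscall; ret"}

        # Copy each block to a randomized location in a new section...
        new_addrs = random.sample(range(0x400000, 0x500000, 0x10), k=len(code))
        relocated = dict(zip(new_addrs, code.values()))

        # ...and rewrite the old locations as jumps to the copies (the
        # original section would then be marked non-executable).
        trampolines = {old: "jmp %#x" % new for old, new in zip(code, new_addrs)}

        print(trampolines)
        print(relocated)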
    Clowntown Express: interesting bugs and running a bug bounty program
    Collin Greene, Facebook
    Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing security on diffs, but there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). FB uses security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money than just code review or static code analysis. The bounty program started in July 2011 and has paid out $1.5 million to date. 14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else); the top-paid submitters earn 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13, and some submitters have been hired. Bug bounties also surface bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.
    Active Fingerprinting of Encrypted VPNs
    Anna Shubina, Dartmouth Institute for Security, Technology, and Society
    (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if far away from the network. More exploration is needed of how TCP behaves in real life with respect to timing.
    Making Attacks Go Backwards
    FuzzyNop, Mandiant
    This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group; they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first:
    - "hiding" malware in the temp, recycle, or root directories
    - creation of unnamed scheduled tasks
    - obvious names of files and syscalls ("ClearEventLog")
    - uncleared event logs; clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system
    Reverse engineering is hard: disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]", "[RightShift]") or timestamp printf strings; look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: one can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware, and malware typically also needs command and control.
    Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
    John Ashaman, Security Innovation
    Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard; one obstacle, for example, is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink. The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information, see his posts on the Security Innovation blog and NRefactory on GitHub.
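    The declarations-as-nodes, calls-as-edges idea is easy to see in miniature. Here is a toy Python sketch using the standard ast module (Scat itself targets C#, so this is only an analogy, and the sample functions are made up):

        import ast

        # A tiny "program" to analyze: handler() calls helper(), which calls sanitize().
        SOURCE = (
            "def helper(p):\n"
            "    return sanitize(p)\n"
            "\n"
            "def handler(request):\n"
            "    return helper(request)\n"
        )

        class CallGraph(ast.NodeVisitor):
            # Nodes are function definitions; edges are calls made inside them.
            def __init__(self):
                self.edges = []
                self._current = None

            def visit_FunctionDef(self, node):
                prev, self._current = self._current, node.name
                self.generic_visit(node)
                self._current = prev

            def visit_Call(self, node):
                if self._current and isinstance(node.func, ast.Name):
                    self.edges.append((self._current, node.func.id))
                self.generic_visit(node)

        cg = CallGraph()
        cg.visit(ast.parse(SOURCE))
        print(cg.edges)  # [('helper', 'sanitize'), ('handler', 'helper')]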
    Mask Your Checksums—The Gorry Details
    Eric (XlogicX) Davisson
    Sometimes, when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data), and other parts of the packet may leak more data about the IP. The TCP and IP checksums both cover the same data, so one can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If we get hundreds of possible results (16 possibilities for each unknown masked nibble), one can do other things to narrow them down, such as looking at the packet contents for domain or geo information. With hundreds of results, one can import them in CSV format into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached, and was able to find the exact IP address, given the geo data and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security.
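    To see why a stale checksum betrays a masked address, here is a Python sketch of the standard IP header checksum (the RFC 1071 ones'-complement sum); an analyst can brute-force the masked nibbles and keep only the candidate addresses whose recomputed checksum matches the one left in the dump:

        import struct

        def ip_checksum(header: bytes) -> int:
            # RFC 1071: ones'-complement sum of all 16-bit words, with the
            # checksum field itself zeroed before summing.
            if len(header) % 2:
                header += b"\x00"
            total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
            while total >> 16:                  # fold carries back in
                total = (total & 0xFFFF) + (total >> 16)
            return ~total & 0xFFFF

        # Masking the source IP (header bytes 12-15) without recomputing this
        # value leaves a constraint: only source addresses consistent with the
        # original checksum remain possible, which is the leak Eric described.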
    Adventures with weird machines thirty years after "Reflections on Trusting Trust"
    Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)
    "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs" or bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author: can you trust the code? Hell no! There are too many factors; it's Babylonian in nature. Why not?
    - Input is not well-defined/recognized, so code's assumptions about "checked" input will be violated (bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive.
    - Input can be well-formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
    - Input is seen differently by different pieces of the program or toolchain.
    Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org), or it's a "virtual machine" for inputs to drive into pwn-age. The ELF ABI (the UNIX/Linux executable file format) is the case study. Problems can arise from these steps (without planting bugs):
    - compiler
    - linker
    - loader
    - ld.so/rtld relocator
    - DWARF (debugger info)
    - exceptions
    The problem is that you can't really automatically analyze code (it's the "halting problem" and undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading; you must have tables and metadata. Any sufficiently complex input data is the same as VM bytecode. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file; for more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or take DWARF exception handling data: .eh_frame + glibc == a Turing machine. The x86 MMU (IDT, GDT, TSS) can use address translation to create a Turing machine: the page handler reads and writes memory (on page fault), using a page table, which serves as the Turing machine's bytecode. There is an example on GitHub using this TM that will fly a glider across the screen.
    Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF: there are several parsers in the OS toolchain, which are all different. One can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.
    Conclusions:
    - trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it
    - complex data formats mean bugs
    - there is no "chain of trust" in Babylon! (that is, with parser differentials)
    - we need to squeeze complexity out of data until data stops being "code equivalent"
    Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
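    The "input drives state transitions" point is literal. Here is a toy Python sketch of an input handler as a DFA (a made-up three-state recognizer, purely to illustrate the langsec framing): the input bytes are, in effect, the bytecode that programs the handler.

        # A tiny DFA: the input string drives the transitions, so the data is
        # effectively a program executing on this "machine".
        TRANSITIONS = {
            ("start", "A"): "armed",
            ("armed", "B"): "fire",
            ("fire", "C"): "start",
        }

        def run(machine_input):
            state = "start"
            for ch in machine_input:
                # Unrecognized input falls through unchanged; a langsec-style
                # recognizer would instead reject it outright.
                state = TRANSITIONS.get((state, ch), state)
            return state

        print(run("AB"))   # 'fire': this input "programmed" its way to a new state
        print(run("XYZ"))  # 'start': ignored by this handler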

    Read the article

< Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >