Search Results

Search found 20799 results on 832 pages for 'long integer'.


  • A question about TBB/C++ code

    - by Jackie
    I am reading the Threading Building Blocks book. I do not understand this piece of code: FibTask& a=*new(allocate_child()) FibTask(n-1,&x); FibTask& b=*new(allocate_child()) FibTask(n-2,&y); What do these directives mean? How do a class object reference and new work together here? Thanks for the explanation. The following code is the definition of the class FibTask. class FibTask: public task { public: const long n; long* const sum; FibTask(long n_,long* sum_):n(n_),sum(sum_) {} task* execute() { if(n<CutOff) { *sum=SerialFib(n); } else { long x,y; FibTask& a=*new(allocate_child()) FibTask(n-1,&x); FibTask& b=*new(allocate_child()) FibTask(n-2,&y); set_ref_count(3); spawn(b); spawn_and_wait_for_all(a); *sum=x+y; } return 0; } };

    Read the article

  • Strange performance behaviour

    - by plastilino
    I'm puzzled by this. On my machine: Direct calculation: 375 ms. Method calculation: 3594 ms, about TEN times SLOWER. If I place the method calculation BEFORE the direct calculation, both times are SIMILAR. Would you check it on your machine? class Test { static long COUNT = 50000 * 10000; private static long BEFORE; /*--------METHOD---------*/ public static final double hypotenuse(double a, double b) { return Math.sqrt(a * a + b * b); } /*--------TIMER---------*/ public static void getTime(String text) { if (BEFORE == 0) { BEFORE = System.currentTimeMillis(); return; } long now = System.currentTimeMillis(); long elapsed = (now - BEFORE); BEFORE = System.currentTimeMillis(); if (text.equals("")) { return; } String message = "\r\n" + text + "\r\n" + "Elapsed time: " + elapsed + " ms"; System.out.println(message); } public static void main(String[] args) { double a = 0.2223221101; double b = 122333.167; getTime(""); /*--------DIRECT CALCULATION---------*/ for (int i = 1; i < COUNT; i++) { Math.sqrt(a * a + b * b); } getTime("Direct: "); /*--------METHOD---------*/ for (int k = 1; k < COUNT; k++) { hypotenuse(a, b); } getTime("Method: "); } }
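
    The usual explanation for this pattern is JIT warm-up and dead-code elimination rather than method-call overhead: whichever loop runs first pays the interpretation and compilation cost, and a result that is never used may be optimized away entirely. Below is a minimal sketch of a fairer comparison (the class name and warm-up scheme are illustrative, not from the original post): warm both paths up before timing and accumulate the results so the JIT cannot discard the work.

    public class HypotBench {
        static final long COUNT = 50_000_000L;

        static double hypotenuse(double a, double b) { return Math.sqrt(a * a + b * b); }

        static double direct(double a, double b) {
            double sink = 0;
            for (long i = 0; i < COUNT; i++) { sink += Math.sqrt(a * a + b * b); }
            return sink;
        }

        static double viaMethod(double a, double b) {
            double sink = 0;
            for (long i = 0; i < COUNT; i++) { sink += hypotenuse(a, b); }
            return sink;
        }

        public static void main(String[] args) {
            double a = 0.2223221101, b = 122333.167;
            direct(a, b); viaMethod(a, b);               // warm-up: let the JIT compile both loops
            long t0 = System.nanoTime();
            double s1 = direct(a, b);
            long t1 = System.nanoTime();
            double s2 = viaMethod(a, b);
            long t2 = System.nanoTime();
            System.out.println("direct: " + (t1 - t0) / 1_000_000 + " ms, "
                    + "method: " + (t2 - t1) / 1_000_000 + " ms (sinks " + s1 + ", " + s2 + ")");
        }
    }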

    Read the article

  • I just wanted to DES 4096 bytes of data with a 128 bits key...

    - by badp
    ...and what the nice folks at OpenSSL graciously provide me with is this. :) Now, since you shouldn't be guessing when using cryptography, I come here for confirmation: what is the function call I want to use? What I understood: a 128-bit key is 16 bytes, so I'll need double DES (2 × 8 bytes). This leaves me with only a few function calls: void DES_ede2_cfb64_encrypt(const unsigned char *in, unsigned char *out, long length, DES_key_schedule *ks1, DES_key_schedule *ks2, DES_cblock *ivec, int *num, int enc); void DES_ede2_cbc_encrypt(const unsigned char *input, unsigned char *output, long length, DES_key_schedule *ks1, DES_key_schedule *ks2, DES_cblock *ivec, int enc); void DES_ede2_cfb64_encrypt(const unsigned char *in, unsigned char *out, long length, DES_key_schedule *ks1, DES_key_schedule *ks2, DES_cblock *ivec, int *num, int enc); void DES_ede2_ofb64_encrypt(const unsigned char *in, unsigned char *out, long length, DES_key_schedule *ks1, DES_key_schedule *ks2, DES_cblock *ivec, int *num); In this case, I guess the function I want to call is DES_ede2_cfb64_encrypt, although I'm not so sure -- I definitely don't need padding here and I'd have to care about what ivec and num are, and how I want to generate them... What am I missing?
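
    Not an OpenSSL answer, but as a cross-check of the concept, the same operation written against Java's JCE (a sketch, with illustrative names) shows the two pieces the OpenSSL call also needs: a 16-byte key expanded to two-key EDE, and an 8-byte IV for a streaming mode such as CFB, which indeed requires no padding.

    import javax.crypto.Cipher;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;
    import java.security.SecureRandom;
    import java.util.Arrays;

    public class TwoKeyTripleDes {
        static byte[] encrypt(byte[] key16, byte[] iv8, byte[] data) throws Exception {
            byte[] key24 = Arrays.copyOf(key16, 24);                  // two-key EDE: K3 = K1
            System.arraycopy(key16, 0, key24, 16, 8);
            Cipher c = Cipher.getInstance("DESede/CFB8/NoPadding");   // stream mode, no padding needed
            c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key24, "DESede"), new IvParameterSpec(iv8));
            return c.doFinal(data);
        }

        public static void main(String[] args) throws Exception {
            byte[] key = new byte[16], iv = new byte[8], data = new byte[4096];
            new SecureRandom().nextBytes(key);
            new SecureRandom().nextBytes(iv);                         // the IV is random and travels with the ciphertext
            System.out.println(encrypt(key, iv, data).length + " bytes of ciphertext");
        }
    }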

    Read the article

  • Convert VBA to VBS

    - by dnLL
    I have a little VBA script with some functions that I would like to convert to a single VBS file. Here is an example of what I got: Private Declare Function GetPrivateProfileString Lib "kernel32" Alias "GetPrivateProfileStringA" (ByVal lpApplicationName As String, ByVal lpKeyName As Any, ByVal lpDefault As String, ByVal lpReturnedString As String, ByVal nSize As Long, ByVal lpFileName As String) As Long Private Function ReadIniFileString(ByVal Sect As String, ByVal Keyname As String) As String Dim Worked As Long Dim RetStr As String * 128 Dim StrSize As Long Dim iNoOfCharInIni As Integer Dim sIniString, sProfileString As String iNoOfCharInIni = 0 sIniString = "" If Sect = "" Or Keyname = "" Then MsgBox "Erreur lors de la lecture des paramètres dans " & IniFileName, vbExclamation, "INI" Access.Application.Quit Else sProfileString = "" RetStr = Space(128) StrSize = Len(RetStr) Worked = GetPrivateProfileString(Sect, Keyname, "", RetStr, StrSize, IniFileName) If Worked Then iNoOfCharInIni = Worked sIniString = Left$(RetStr, Worked) End If End If ReadIniFileString = sIniString End Function And then, I need to use this function to put some values in strings. VBS doesn't seem to like any of my var declaration ((Dim) MyVar As MyType). If I'm able to adapt that code to VBS, I should be able to do the rest of my functions too. How can I adapt/convert this to VBS? Thank you.

    Read the article

  • Project Euler #119 Make Faster

    - by gangqinlaohu
    Trying to solve Project Euler problem 119: The number 512 is interesting because it is equal to the sum of its digits raised to some power: 5 + 1 + 2 = 8, and 8^3 = 512. Another example of a number with this property is 614656 = 28^4. We shall define a_n to be the nth term of this sequence and insist that a number must contain at least two digits to have a sum. You are given that a_2 = 512 and a_10 = 614656. Find a_30. Question: Is there a more efficient way to find the answer than just checking every number until a_30 is found? My code: int currentNum = 0; long value = 0; for (long a = 11; currentNum != 30; a++){ //maybe a++ is inefficient int test = Util.sumDigits(a); if (isPower(a, test)) { currentNum++; value = a; System.out.println(value + ":" + currentNum); } } System.out.println(value); isPower checks if a is a power of test. Util.sumDigits: public static int sumDigits(long n){ int sum = 0; String s = "" + n; while (!s.equals("")){ sum += Integer.parseInt("" + s.charAt(0)); s = s.substring(1); } return sum; } The program has been running for about 30 minutes (might be overflow on the long). Output (so far): 81:1 512:2 2401:3 4913:4 5832:5 17576:6 19683:7 234256:8 390625:9 614656:10 1679616:11 17210368:12 34012224:13 52521875:14 60466176:15 205962976:16 612220032:17
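
    A much faster route than scanning every integer, sketched below with assumed search bounds: generate the candidates b^e directly with BigInteger (so overflow is impossible), keep the values whose digit sum equals the base b, sort them, and read off the 30th. The limits of 100 for the base and 60 for the exponent are assumptions chosen to comfortably cover the first 30 terms.

    import java.math.BigInteger;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class Euler119 {
        static int digitSum(BigInteger n) {
            int sum = 0;
            for (char c : n.toString().toCharArray()) { sum += c - '0'; }
            return sum;
        }

        public static void main(String[] args) {
            List<BigInteger> hits = new ArrayList<>();
            for (int base = 2; base < 100; base++) {
                BigInteger value = BigInteger.valueOf(base);
                for (int exp = 2; exp <= 60; exp++) {
                    value = value.multiply(BigInteger.valueOf(base));   // value = base^exp
                    if (value.compareTo(BigInteger.TEN) >= 0 && digitSum(value) == base) {
                        hits.add(value);                                // digit sum of value, raised to a power, gives value
                    }
                }
            }
            Collections.sort(hits);
            System.out.println("a_30 = " + hits.get(29));
        }
    }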

    Read the article

  • Pointer reference and dereference

    - by ZhekakehZ
    I have the following code: #include <iostream> char ch[] = "abcd"; int main() { std::cout << (long)(int*)(ch+0) << ' ' << (long)(int*)(ch+1) << ' ' << (long)(int*)(ch+2) << ' ' << (long)(int*)(ch+3) << std::endl; std::cout << *(int*)(ch+0) << ' ' << *(int*)(ch+1) << ' ' << *(int*)(ch+2) << ' ' << *(int*)(ch+3) << std::endl; std::cout << int('abcd') << ' ' << int('bcd') << ' ' << int('cd') << ' ' << int('d') << std::endl; } My question is why the dereferenced value at 'd' is 100? I think it should be: int('d') << 24; //plus some trash on stack after ch And why are the second and the third lines of the output different? 6295640 6295641 6295642 6295643 1684234849 6579042 25699 100 1633837924 6447972 25444 100 Thanks.
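
    What the second line prints is the bytes 'a','b','c','d',0,... reinterpreted as little-endian 32-bit integers, so the byte 'd' (ASCII 100) ends up in the low-order position and the bytes after the terminating zero happen to be zero here (the array is in static storage, not on the stack); the third line differs because GCC packs multicharacter literals in the opposite byte order. A small illustration of the same reinterpretation, written in Java with ByteBuffer purely for demonstration:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class LittleEndianView {
        public static void main(String[] args) {
            byte[] ch = { 'a', 'b', 'c', 'd', 0, 0, 0, 0 };    // "abcd" plus the zero bytes that followed it
            ByteBuffer buf = ByteBuffer.wrap(ch).order(ByteOrder.LITTLE_ENDIAN);
            for (int offset = 0; offset <= 3; offset++) {
                System.out.print(buf.getInt(offset) + " ");     // 1684234849 6579042 25699 100
            }
            System.out.println();
            int abcd = ('a' << 24) | ('b' << 16) | ('c' << 8) | 'd';
            System.out.println(abcd);                           // 1633837924, how GCC packs 'abcd'
        }
    }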

    Read the article

  • How does java.util.Collections.contains() perform faster than a linear search?

    - by The111
    I've been fooling around with a bunch of different ways of searching collections, collections of collections, etc. Doing lots of stupid little tests to verify my understanding. Here is one which boggles me (source code further below). In short, I am generating N random integers and adding them to a list. The list is NOT sorted. I then use Collections.contains() to look for a value in the list. I intentionally look for a value that I know won't be there, because I want to ensure that the entire list space is probed. I time this search. I then do another linear search manually, iterating through each element of the list and checking if it matches my target. I also time this search. On average, the second search takes 33% longer than the first one. By my logic, the first search must also be linear, because the list is unsorted. The only possibility I could think of (which I immediately discard) is that Java is making a sorted copy of my list just for the search, but (1) I did not authorize that usage of memory space and (2) I would think that would result in MUCH more significant time savings with such a large N. So if both searches are linear, they should both take the same amount of time. Somehow the Collections class has optimized this search, but I can't figure out how. So... what am I missing? import java.util.*; public class ListSearch { public static void main(String[] args) { int N = 10000000; // number of ints to add to the list int high = 100; // upper limit for random int generation List<Integer> ints; int target = -1; // target will not be found, forces search of entire list space long start; long end; ints = new ArrayList<Integer>(); start = System.currentTimeMillis(); System.out.print("Generating new list... "); for (int i = 0; i < N; i++) { ints.add(((int) (Math.random() * high)) + 1); } end = System.currentTimeMillis(); System.out.println("took " + (end-start) + "ms."); start = System.currentTimeMillis(); System.out.print("Searching list for target (method 1)... "); if (ints.contains(target)) { // nothing } end = System.currentTimeMillis(); System.out.println(" Took " + (end-start) + "ms."); System.out.println(); ints = new ArrayList<Integer>(); start = System.currentTimeMillis(); System.out.print("Generating new list... "); for (int i = 0; i < N; i++) { ints.add(((int) (Math.random() * high)) + 1); } end = System.currentTimeMillis(); System.out.println("took " + (end-start) + "ms."); start = System.currentTimeMillis(); System.out.print("Searching list for target (method 2)... "); for (Integer i : ints) { // nothing } end = System.currentTimeMillis(); System.out.println(" Took " + (end-start) + "ms."); } }
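
    Both searches really are linear: ArrayList.contains() just calls indexOf(), which walks the backing array and tests each element with equals(), while the for-each loop goes through an Iterator object and, as written above, never compares anything, so the two timings measure different work (plus run-to-run JIT and GC noise). A sketch of a manual loop that mirrors what contains() actually does (illustrative only):

    import java.util.List;

    public class ManualContains {
        // roughly what ArrayList.contains(target) does internally: an indexOf() scan
        static boolean containsManually(List<Integer> ints, Integer target) {
            for (int i = 0, n = ints.size(); i < n; i++) {
                if (target.equals(ints.get(i))) { return true; }
            }
            return false;
        }

        public static void main(String[] args) {
            List<Integer> ints = List.of(3, 1, 4, 1, 5);
            System.out.println(containsManually(ints, -1));   // false, after scanning the whole list
            System.out.println(containsManually(ints, 4));    // true
        }
    }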

    Read the article

  • how to pass arraylist as parameter to another screen

    - by user2867267
    How can I pass an ArrayList as a parameter to another activity? In my case I am not using a ListView. I check if the ArrayList contains a single element and then pass that ArrayList as a parameter to another screen; see this line: Category_name.get(position).toString()). How do I remove position? How do I pass the ArrayList as a parameter to another activity? static ArrayList<Long> Menu_ID = new ArrayList<Long>(); static ArrayList<String> Category_name = new ArrayList<String>(); JSONArray school = json2.getJSONArray("data"); for (int i = 0; i < school.length(); i++) { JSONObject object = school.getJSONObject(i); Category_ID.add((long) i); Menu_ID.add(Long.parseLong(object.getString("menu_id"))); Category_name.add(object.getString("menu_title")); } Intent iMenuList = new Intent(MenuGroup.this, thirdstep.class); menuidvalue=""; menuidvalue =( Menu_ID.get(position)).toString(); iMenuList.putExtra("Menu_ID",menuidvalue); iMenuList.putExtra("menu_group", Category_name.get(position).toString()); startActivity(iMenuList);
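
    If the whole list has to travel to the next activity, Intent already has extras for ArrayLists, so no position is needed at all. A sketch of the idea (the activity, class, and key names below are made up for illustration):

    import android.app.Activity;
    import android.content.Intent;
    import android.os.Bundle;
    import java.util.ArrayList;

    public class MenuGroupActivity extends Activity {
        static ArrayList<String> Category_name = new ArrayList<String>();
        static ArrayList<Long> Menu_ID = new ArrayList<Long>();

        void openNextScreen() {
            Intent i = new Intent(this, ThirdStepActivity.class);
            i.putStringArrayListExtra("menu_group_list", Category_name);  // the whole list, not one element
            i.putExtra("menu_id_list", Menu_ID);                          // ArrayList<Long> is Serializable
            startActivity(i);
        }
    }

    class ThirdStepActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            ArrayList<String> names = getIntent().getStringArrayListExtra("menu_group_list");
            ArrayList<Long> ids = (ArrayList<Long>) getIntent().getSerializableExtra("menu_id_list");
        }
    }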

    Read the article

  • SSIS – Delete all files except for the most recent one

    - by jorg
    Quite often one or more sources for a data warehouse consist of flat files. Most of the time these files are delivered as a zip file with a date in the file name, for example FinanceDataExport_20100528.zip Currently I work on a project that does a full load into the data warehouse every night. A zip file with some flat files in it is dropped in a directory on a daily basis. Sometimes there are multiple zip files in the directory; this can happen because the ETL failed or somebody puts a new zip file in the directory manually. Because the ETL isn’t incremental, only the most recent file needs to be loaded. To implement this I used the simple code below; it checks which file is the most recent and deletes all other files. Note: In a previous blog post I wrote about unzipping zip files within SSIS, you might also find this useful: SSIS – Unpack a ZIP file with the Script Task Public Sub Main() 'Use this piece of code to loop through a set of files in a directory 'and delete all files except for the most recent one based on a date in the filename. 'File name example: 'DataExport_20100413.zip Dim rootDirectory As New DirectoryInfo(Dts.Variables("DirectoryFromSsisVariable").Value.ToString) Dim mostRecentFile As String = "" Dim currentFileDate As Integer Dim mostRecentFileDate As Integer = 0 'Check which file is the most recent For Each fi As FileInfo In rootDirectory.GetFiles("*.zip") currentFileDate = CInt(Left(Right(fi.Name, 12), 8)) 'Get date from current filename (based on a file that ends with: YYYYMMDD.zip) If currentFileDate > mostRecentFileDate Then mostRecentFileDate = currentFileDate mostRecentFile = fi.Name End If Next 'Delete all files except the most recent one For Each fi As FileInfo In rootDirectory.GetFiles("*.zip") If fi.Name <> mostRecentFile Then File.Delete(rootDirectory.ToString + "\" + fi.Name) End If Next Dts.TaskResult = ScriptResults.Success End Sub

    Read the article

  • Kernel compile error with iw_ndis.c

    - by James
    Hi, I have a hp pavilion dm3t with intel HD graphics running ubuntu 10.10 64 bit. I'm trying to compile and install a patched kernel according to this, https://launchpad.net/~kamalmostafa/+archive/linux-kamal-mjgbacklight So I downloaded the tarball from here (linked to from the page above): http://kernel.ubuntu.com/git?p=kamal/ubuntu-maverick.git;a=shortlog;h=refs/heads/mjg-backlight I untar'd it to a directory, entered the directory and did: make defconfig I'm not sure if that's what I should have done but it was successful, so I did: make which seemed to work fine until it gave these errors: ubuntu/ndiswrapper/iw_ndis.c:1966: error: unknown field ‘num_private’ specified in initializer ubuntu/ndiswrapper/iw_ndis.c:1966: warning: initialization makes pointer from integer without a cast ubuntu/ndiswrapper/iw_ndis.c:1967: error: unknown field ‘num_private_args’ specified in initializer ubuntu/ndiswrapper/iw_ndis.c:1967: warning: excess elements in struct initializer ubuntu/ndiswrapper/iw_ndis.c:1967: warning: (near initialization for ‘ndis_handler_def’) ubuntu/ndiswrapper/iw_ndis.c:1970: error: unknown field ‘private’ specified in initializer ubuntu/ndiswrapper/iw_ndis.c:1970: warning: initialization makes integer from pointer without a cast ubuntu/ndiswrapper/iw_ndis.c:1970: error: initializer element is not computable at load time ubuntu/ndiswrapper/iw_ndis.c:1970: error: (near initialization for ‘ndis_handler_def.num_standard’) ubuntu/ndiswrapper/iw_ndis.c:1971: error: unknown field ‘private_args’ specified in initializer ubuntu/ndiswrapper/iw_ndis.c:1971: warning: initialization from incompatible pointer type make[2]: *** [ubuntu/ndiswrapper/iw_ndis.o] Error 1 make[1]: *** [ubuntu/ndiswrapper] Error 2 make: *** [ubuntu] Error 2 How can I compile and install this kernel successfully? I'm new to this and would appreciate any help.

    Read the article

  • Problem with SLATEC routine usage with gfortran

    - by user39461
    I am trying to compute the Bessel function of the second kind (Bessel_y) using the SLATEC's Amos library available on Netlib. Here is the SLATEC code I use. Below I have pasted my test program that calls SLATEC routine CBESY. PROGRAM BESSELTEST IMPLICIT NONE REAL:: FNU INTEGER, PARAMETER :: N = 2, KODE = 1 COMPLEX,ALLOCATABLE :: CWRK (:), CY (:) COMPLEX:: Z, ci INTEGER :: NZ, IERR ALLOCATE(CWRK(N), CY(N)) ci = cmplx (0.0, 1.0) FNU = 0.0e0 Z = CMPLX(0.3e0, 0.4e0) CALL CBESY(Z, FNU, KODE, N, CY, NZ, CWRK, IERR) WRITE(*,*) 'CY: ', CY WRITE(*,*) 'IERR: ', IERR STOP END PROGRAM And here is the output of the above program: CY: ( 5.78591091E-39, 5.80327020E-39) ( 0.0000000 , 0.0000000 ) IERR: 4 Ierr = 4 meaning there is some problem with the input itself. To be precise, the IERR = 4 means the following as per the header info in CBESY.f file: ! IERR=4, CABS(Z) OR FNU+N-1 TOO LARGE - NO COMPUTA- ! TION BECAUSE OF COMPLETE LOSSES OF SIGNIFI- ! CANCE BY ARGUMENT REDUCTION Clearly, CABS(Z) (which is 0.50) or FNU + N - 1 (which is 1.0) are not too large but still the routine CBESY throws the error message number 4 as above. The CY array should have following values for the argument given in above code: CY(1) = -0.4983 + 0.6700i CY(2) = -1.0149 + 0.9485i These values are computed using Matlab. I can't figure out what's the problem when I call CBESY from SLATEC library. Any clues? Much thanks for the suggestions/help. PS: if it is of any help, I used gfortran to compile, link and then create the SLATEC library file ( the .a file ) which I keep in the same directory as my test program above. shell command to execute above code: gfortran -c BesselTest.f95 gfortran -o a *.o libslatec.a a GD.

    Read the article

  • T-SQL Tuesday #31 - Logging Tricks with CONTEXT_INFO

    - by Most Valuable Yak (Rob Volk)
    This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas) and covers the topic of logging (the recording of information, not the harvesting of trees) and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars, check the spelling again, there will be a quiz later). This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet).  CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier.  It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it.  In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session. While you're limited to 128 bytes, one big advantage that CONTEXT_INFO has is that it's independent of any transactions.  If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating it can be.  CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back":  HA-HA, I GOT YOU!  Since GO starts a new batch all variable declarations are lost. Here's a simple example I recently used at work.  I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput.  I also needed to log how long it took for the script to run and include the mirror settings for the database in question.  I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable.  My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not). Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used.  As it turns out, I only needed to grab the start time at the beginning, I could get the rest of the data at the end of the process.   The code is pretty small: declare @time binary(128)=cast(getdate() as binary(8)) set context_info @time   ... rest of Big Adventure code ...   go use master; insert mirror_test(server,role,partner,db,state,safety,start,duration) select @@servername, mirroring_role_desc, mirroring_partner_instance, db_name(database_id), mirroring_state_desc, mirroring_safety_level_desc, cast(cast(context_info() as binary(8)) as datetime), datediff(s,cast(cast(context_info() as binary(8)) as datetime),getdate()) from sys.database_mirroring where db_name(database_id) like 'Adv%';   I declared @time as a binary(128) since CONTEXT_INFO is defined that way.  I couldn't convert GETDATE() to binary(128) as it would pad the first 120 bytes as 0x00.  To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server do the implicit conversion.  It's not the safest way perhaps, but it works on my machine. 
:) As I mentioned earlier, you can query system views for sessions and get their CONTEXT_INFO.  With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or are just curious  how long certain parts take.  In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor.  (His code is in UPPERCASE as it was in the original, mine is all lowercase): declare @msg binary(128) set @msg=cast('Altering bigProduct.ProductID' as binary(128)) set context_info @msg go ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL GO set context_info 0x0 go declare @msg1 binary(128) set @msg1=cast('Adding pk_bigProduct Constraint' as binary(128)) set context_info @msg1 go ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID) GO set context_info 0x0 go declare @msg2 binary(128) set @msg2=cast('Altering bigTransactionHistory.TransactionID' as binary(128)) set context_info @msg2 go ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL GO set context_info 0x0 go declare @msg3 binary(128) set @msg3=cast('Adding pk_bigTransactionHistory Constraint' as binary(128)) set context_info @msg3 go ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID) GO set context_info 0x0 go declare @msg4 binary(128) set @msg4=cast('Creating IX_ProductId_TransactionDate Index' as binary(128)) set context_info @msg4 go CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate ON bigTransactionHistory(ProductId,TransactionDate) INCLUDE(Quantity,ActualCost) GO set context_info 0x0   This doesn't include the entire script, only those portions that altered a table or created an index.  One annoyance is that SET CONTEXT_INFO requires a literal or variable, you can't use an expression.  And since GO starts a new batch I need to declare a variable in each one.  And of course I have to use CAST because it won't implicitly convert varchar to binary.  And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes.  And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work. So what does all this aggravation get you?  As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want): select CAST(context_info as varchar(128)) from sys.dm_exec_sessions where session_id=51   Since SQL Server 2005 introduced the new system and dynamic management views (DMVs) there's not as much need for tagging a session with these kinds of messages.  You can get the session start time and currently executing statement from them, and neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it).  Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction.  All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation.  If you'd like an exercise, consider adding the system time to the messages in the last example, and an automated job to query and parse it from the system tables.  That would let you track how long each statement ran without having to run Profiler. #TSQL2sDay

    Read the article

  • Output = MAXDOP 1

    - by Dave Ballantyne
    It is widely known that data modifications on table variables do not support parallelism; Peter Larsson has a good example of that here. Whilst tracking down a performance issue, I saw that using the OUTPUT clause also causes parallelism to not be used. By way of example, first let's create two tables with a simple parent and child (one to one) relationship, and then populate them with 1,000,000 rows. Drop table Parent Drop table Child go create table Parent(id integer identity Primary Key, data1 char(255)) Create Table Child(id integer Primary Key) go insert into Parent(data1) Select top 1000000 NULL from sys.columns a cross join sys.columns b insert into Child Select id from Parent go If we then execute update Parent set data1 ='' from Parent join Child on Parent.Id = Child.Id where Parent.Id %100 =1 and Child.id %100 =1 we should see an execution plan that uses parallelism. However, if the OUTPUT clause is now used: update Parent set data1 ='' output inserted.id from Parent join Child on Parent.Id = Child.Id where Parent.Id %100 =1 and Child.id %100 =1 the execution plan shows that parallelism was not used. Make of that what you will, but I thought that this was a pretty unexpected outcome. Update: Laurence Hoff has mailed me to note that when the OUTPUT results are captured to a temporary table using the INTO clause, then parallelism is used. Naturally if you use a table variable then there is still no parallelism.

    Read the article

  • SQL SERVER – Difference between DATABASEPROPERTY and DATABASEPROPERTYEX

    - by pinaldave
    Earlier I asked a simple question on Facebook regarding the difference between DATABASEPROPERTY and DATABASEPROPERTYEX in SQL Server. You can view the original conversation over here. The conversation immediately became very interesting and lots of healthy discussion happened on the Facebook page. The best part of having the conversation on the Facebook page is the comfort it provides and the leaner commenting interface. Question Question from SQLAuthority.com: What is the difference between DATABASEPROPERTY and DATABASEPROPERTYEX in SQL Server? Answer Answer from Rakesh Kumar: DATABASEPROPERTY is supported for backward compatibility but does not provide information about the properties added in this release. Also, many properties supported by DATABASEPROPERTY have been replaced by new properties in DATABASEPROPERTYEX. - source (MSDN). Answer from Alphonso Jones: The only real difference I can see is one, the number of properties contained and the other is that EX returns a sql_variant while DATABASEPROPERTY returns only int. Answer from Ambati Venkatasiva: Both are system metadata functions. DATABASEPROPERTYEX returns the current setting of the specified database option. DATABASEPROPERTYEX returns a sql_variant value and DATABASEPROPERTY returns an integer value. Answer from Rama Sankar Molleti: Here is the best example about DATABASEPROPERTYEX SELECT DATABASEPROPERTYEX('dbname', 'Collation') Result SQL_1xCompat_CP850_CI_AS Whereas with DATABASEPROPERTY it returns nothing as the return type for this is integer. The sql_variant datatype stores values of various SQL Server supported datatypes except text, ntext, image and timestamp. Answer from Alok Seth: SELECT DATABASEPROPERTYEX('AdventureWorks', 'Status') DatabaseStatus_DATABASEPROPERTYEX GO --Result - ONLINE SELECT DATABASEPROPERTY('AdventureWorks', 'Status') DatabaseStatus_DATABASEPROPERTY GO --Result - NULL Summary: Use DATABASEPROPERTYEX, as it is the only function supported in future versions, and it returns the status of various database properties which do not exist with DATABASEPROPERTY. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
    While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about.  After analyzing the differences of time between a Dictionary with locking versus the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster with read-heavy multi-threaded operations.  Then, I made the classic blunder of thinking that because the original Dictionary with locking was faster for those write-heavy uses, it was the best choice for those types of tasks.  In short, I fell into the premature-optimization anti-pattern. Basically, the premature-optimization anti-pattern is when a developer is coding very early for a perceived (whether rightly-or-wrongly) performance gain and sacrificing good design and maintainability in the process.  At best, the performance gains are usually negligible and at worst, can either negatively impact performance, or can degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity. Keep in mind the distinction above.  I'm not talking about valid performance decisions.  There are decisions one should make when designing and writing an application that are valid performance decisions.  Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing performance algorithms (linear search vs. binary search).  But these in my mind are macro optimizations.  The error is not in deciding to use a better data structure or algorithm, the anti-pattern as stated above is when you attempt to over-optimize early on in such a way that it sacrifices maintainability. In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking.  This would have been a mistake as I would be trading maintainability (ConcurrentDictionary requires no locking which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified and you don't risk the developer locking incorrectly) -- and I fell for it even when I knew to watch out for it.  I think in my case, and it may be true for others as well, a large part of it was due to the time I was trained as a developer.  I began college in in the 90s when C and C++ was king and hardware speed and memory were still relatively priceless commodities and not to be squandered.  In those days, using a long instead of a short could waste precious resources, and as such, we were taught to try to minimize space and favor performance.  This is why in many cases such early code-bases were very hard to maintain.  I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact just last year I heard a new hire in the company where I work declare that she didn't want to refactor a long method because of function call overhead.  Now back then, that may have been a valid concern, but with today's modern hardware even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway) the results of removing method calls to speed up performance are negligible for the great majority of applications.  Now, obviously, there are those coding applications where speed is absolutely king (for example drivers, computer games, operating systems) where such sacrifices may be made.  
But I would strongly advice against such optimization because of it's cost.  Many folks that are performing an optimization think it's always a win-win.  That they're simply adding speed to the application, what could possibly be wrong with that?  What they don't realize is the cost of their choice.  For every piece of straight-forward code that you obfuscate with performance enhancements, you risk the introduction of bugs in the long term technical debt of the application.  It will become so fragile over time that maintenance will become a nightmare.  I've seen such applications in places I have worked.  There are times I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system for their application to try to squeeze out every ounce of performance.  Unfortunately, the application stability often suffers as a result and it is very difficult for anyone other than the original designer to maintain. I've even seen this recently where I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the 0x standard implementation).  To me this was almost a joke.  Twice as slow sounds bad, but it almost never as bad as you think -- especially if you're gaining safety.  The only time twice is really that much slower is when once was too slow to begin with.  Think about it.  2 minutes is slow as a response time because 1 minute is slow.  But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, this is trivial!  The only way you'd ever really notice this would be in iterating a collection just for the sake of iterating (i.e. no other operations).  To my mind, the added safety makes the extra time worth it. Always favor safety and maintainability when you can.  I know it can be a hard habit to break, especially if you started out your career early or in a language such as C where they are very performance conscious.  But in reality, these type of micro-optimizations only end up hurting you in the long run. Remember the two laws of optimization.  I'm not sure where I first heard these, but they are so true: For beginners: Do not optimize. For experts: Do not optimize yet. This is so true.  If you're a beginner, resist the urge to optimize at all costs.  And if you are an expert, delay that decision.  As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient.  Chances are it will be network, database, or disk hits that will be your slow-down, not your code.  As they say, 98% of your code's bottleneck is in 2% of your code so premature-optimization may add maintenance and safety debt that won't have any measurable impact.  Instead, code for maintainability and safety, and then, and only then, when you find a true bottleneck, then you should go back and optimize further.

    Read the article

  • Dynamic/Adaptive RLE

    - by Lucius
    So, I'm developing a 2D, tile based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually, and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]
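
    For scale, run-length encoding one 128-tile row is so cheap that decoding, editing, and re-encoding the row on every change is unlikely to be measurable; a sketch of the idea (not the poster's code), storing a row as (runLength, tileId) pairs:

    import java.util.ArrayList;
    import java.util.List;

    public class RowRle {
        // encode a row of tile ids as (runLength, tileId) pairs
        static List<int[]> encode(int[] row) {
            List<int[]> runs = new ArrayList<>();
            int i = 0;
            while (i < row.length) {
                int j = i;
                while (j < row.length && row[j] == row[i]) { j++; }
                runs.add(new int[] { j - i, row[i] });
                i = j;
            }
            return runs;
        }

        static int[] decode(List<int[]> runs, int length) {
            int[] row = new int[length];
            int pos = 0;
            for (int[] run : runs) {
                for (int k = 0; k < run[0]; k++) { row[pos++] = run[1]; }
            }
            return row;
        }

        public static void main(String[] args) {
            int[] row = new int[128];                   // one map row, mostly tile 0
            row[5] = 3; row[6] = 3; row[70] = 9;
            List<int[]> runs = encode(row);
            System.out.println(runs.size() + " runs for 128 tiles");
            int[] working = decode(runs, 128);          // editing: decode, change one tile, re-encode
            working[7] = 3;
            System.out.println(encode(working).size() + " runs after the edit");
        }
    }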

    Read the article

  • Profiling Startup Of VS2012 &ndash; SpeedTrace Profiler

    - by Alois Kraus
    SpeedTrace is a relatively unknown profiler made a company called Ipcas. A single professional license does cost 449€+VAT. For the test I did use SpeedTrace 4.5 which is currently Beta. Although it is cheaper than dotTrace it has by far the most options to influence how profiling does work. First you need to create a tracing project which does configure tracing for one process type. You can start the application directly from the profiler or (much more interesting) it does attach to a specific process when it is started. For this you need to check “Trace the specified …” radio button and enter the process name in the “Process Name of the Trace” edit box. You can even selectively enable tracing for processes with a specific command line. Then you need to activate the trace project by pressing the Activate Project button and you are ready to start VS as usual. If you want to profile the next 10 VS instances that you start you can set the Number of Processes counter to e.g. 10. This is immensely helpful if you are trying to profile only the next 5 started processes. As you can see there are many more tabs which do allow to influence tracing in a much more sophisticated way. SpeedTrace is the only profiler which does not rely entirely on the profiling Api of .NET. Instead it does modify the IL code (instrumentation on the fly) to write tracing information to disc which can later be analyzed. This approach is not only very fast but it does give you unprecedented analysis capabilities. Once the traces are collected they do show up in your workspace where you can open the trace viewer. I do skip the other windows because this view is by far the most useful one. You can sort the methods not only by Wall Clock time but also by CPU consumption and wait time which none of the other products support in their views at the same time. If you want to optimize for CPU consumption sort by CPU time. If you want to find out where most time is spent you need Clock Total time and Clock Waiting. There you can directly see if the method did take long because it did wait on something or it did really execute stuff that did take so long. Once you have found a method you want to drill deeper you can double click on a method to get to the Caller/Callee view which is similar to the JetBrains Method Grid view. But this time you do see much more. In the middle is the clicked method. Above are the methods that call you and below are the methods that you do directly call. Normally you would then start digging deeper to find the end of the chain where the slow method worth optimizing is located. But there is a shortcut. You can press the magic   button to calculate the aggregation of all called methods. This is displayed in the lower left window where you can see each method call and how long it did take. There you can also sort to see if this call stack does only contain methods (e.g. WCF connect calls which you cannot make faster) not worth optimizing. YourKit has a similar feature where it is called Callees List. In the Functions tab you have in the context menu also many other useful analysis options One really outstanding feature is the View Call History Drilldown. When you select this one you get not a sum of all method invocations but a list with the duration of each method call. This is not surprising since SpeedTrace does use tracing to get its timings. There you can get many useful graphs how this method did behave over time. Did it become slower at some point in time or was only the first call slow? 
The diagrams and the list will tell you that. That is all fine but what should I do when one method call was slow? I want to see from where it was coming from. No problem select the method in the list hit F10 and you get the call stack. This is a life saver if you e.g. search for serialization problems. Today Serializers are used everywhere. You want to find out from where the 5s XmlSerializer.Deserialize call did come from? Hit F10 and you get the call stack which did invoke the 5s Deserialize call. The CPU timeline tab is also useful to find out where long pauses or excessive CPU consumption did happen. Click in the graph to get the Thread Stacks window where you can get a quick overview what all threads were doing at this time. This does look like the Stack Traces feature in YourKit. Only this time you get the last called method first which helps to quickly see what all threads were executing at this moment. YourKit does generate a rather long list which can be hard to go through when you have many threads. The thread list in the middle does not give you call stacks or anything like that but you see which methods were found most often executing code by the profiler which is a good indication for methods consuming most CPU time. This does sound too good to be true? I have not told you the best part yet. The best thing about this profiler is the staff behind it. When I do see a crash or some other odd behavior I send a mail to Ipcas and I do get usually the next day a mail that the problem has been fixed and a download link to the new version. The guys at Ipcas are even so helpful to log in to your machine via a Citrix Client to help you to get started profiling your actual application you want to profile. After a 2h telco I was converted from a hater to a believer of this tool. The fast response time might also have something to do with the fact that they are actively working on 4.5 to get out of the door. But still the support is by far the best I have encountered so far. The only downside is that you should instrument your assemblies including the .NET Framework to get most accurate numbers. You can profile without doing it but then you will see very high JIT times in your process which can severely affect the correctness of the measured timings. If you do not care about exact numbers you can also enable in the main UI in the Data Trace tab logging of method arguments of primitive types. If you need to know what files at which times were opened by your application you can find it out without a debugger. Since SpeedTrace does read huge trace files in its reader you should perhaps use a 64 bit machine to be able to analyze bigger traces as well. The memory consumption of the trace reader is too high for my taste. But they did promise for the next version to come up with something much improved.

    Read the article

  • Discuss: PLs are characterised by which (iso)morphisms are implemented

    - by Yttrill
    I am interested to hear discussion of the proposition summarised in the title. As we know programming language constructions admit a vast number of isomorphisms. In some languages in some places in the translation process some of these isomorphisms are implemented, whilst others require code to be written to implement them. For example, in my language Felix, the isomorphism between a type T and a tuple of one element of type T is implemented, meaning the two types are indistinguishable (identical). Similarly, a tuple of N values of the same type is not merely isomorphic to an array, it is an array: the isomorphism is implemented by the compiler. Many other isomorphisms are not implemented for example there is an isomorphism expressed by the following client code: match v with | ((?x,?y),?z = x,(y,z) // Felix match v with | (x,y), - x,(y,z) (* Ocaml *) As another example, a type constructor C of int in Felix may be used directly as a function, whilst in Ocaml you must write a wrapper: let c x = C x Another isomorphism Felix implements is the elimination of unit values, including those in tuples: Felix can do this because (most) polymorphic values are monomorphised which can be done because it is a whole program analyser, Ocaml, for example, cannot do this easily because it supports separate compilation. For the same reason Felix performs type-class dispatch at compile time whilst Haskell passes around dictionaries. There are some quite surprising issues here. For example an array is just a tuple, and tuples can be indexed at run time using a match and returning a value of a corresponding sum type. Indeed, to be correct the index used is in fact a case of unit sum with N summands, rather than an integer. Yet, in a real implementation, if the tuple is an array the index is replaced by an integer with a range check, and the result type is replaced by the common argument type of all the constructors: two isomorphisms are involved here, but they're implemented partly in the compiler translation and partly at run time.

    Read the article

  • Current SPARC Architectures

    - by Darryl Gove
    Different generations of SPARC processors implement different architectures. The architecture that the compiler targets is controlled implicitly by the -xtarget flag and explicitly by the -arch flag. If an application targets a recent architecture, then the compiler gets to play with all the instructions that the new architecture provides. The downside is that the application won't work on older processors that don't have the new instructions. So for developer's there is a trade-off between performance and portability. The way we have solved this in the compiler is to assume a "generic" architecture, and we've made this the default behaviour of the compiler. The only flag that doesn't make this assumption is -fast which tells the compiler to assume that the build machine is also the deployment machine - so the compiler can use all the instructions that the build machine provides. The -xtarget=generic flag tells the compiler explicitly to use this generic model. We work hard on making generic code work well across all processors. So in most cases this is a very good choice. It is also of interest to know what processors support the various architectures. The following Venn diagram attempts to show this: A textual description is as follows: The T1 and T2 processors, in addition to most other SPARC processors that were shipped in the last 10+ years supported V9b, or sparcvis2. The SPARC64 processors from Fujitsu, used in the M-series machines, added support for the floating point multiply accumulate instruction in the sparcfmaf architecture. Support for this instruction also appeared in the T3 - this is called sparcvis3 Later SPARC64 processors added the integer multiply accumulate instruction, this architecture is sparcima. Finally the T4 includes support for both the integer and floating point multiply accumulate instructions in the sparc4 architecture. So the conclusion should be: Floating point multiply accumulate is supported in both the T-series and M-series machines, so it should be a relatively safe bet to start using it. The T4 is a very good machine to deploy to because it supports all the current instruction sets.

    Read the article

  • Compiling kernel problem

    - by James
    Hi, I have a hp pavilion dm3t with intel HD graphics running ubuntu 10.10 64 bit. I'm trying to compile and install a patched kernel according to this, https://launchpad.net/~kamalmostafa/+archive/linux-kamal-mjgbacklight So I downloaded the tarball from here (linked to from the page above): http://kernel.ubuntu.com/git?p=kamal/ubuntu-maverick.git;a=shortlog;h=refs/heads/mjg-backlight I untar'd it to a directory, entered the directory and did: make defconfig which was successful, so I did: make which seemed to work fine until it gave these errors: ubuntu/ndiswrapper/iw_ndis.c:1966: error: unknown field ‘num_private’ specified in initializer ubuntu/ndiswrapper/iw_ndis.c:1966: warning: initialization makes pointer from integer without a cast ubuntu/ndiswrapper/iw_ndis.c:1967: error: unknown field ‘num_private_args’ specified in initializer ubuntu/ndiswrapper/iw_ndis.c:1967: warning: excess elements in struct initializer ubuntu/ndiswrapper/iw_ndis.c:1967: warning: (near initialization for ‘ndis_handler_def’) ubuntu/ndiswrapper/iw_ndis.c:1970: error: unknown field ‘private’ specified in initializer ubuntu/ndiswrapper/iw_ndis.c:1970: warning: initialization makes integer from pointer without a cast ubuntu/ndiswrapper/iw_ndis.c:1970: error: initializer element is not computable at load time ubuntu/ndiswrapper/iw_ndis.c:1970: error: (near initialization for ‘ndis_handler_def.num_standard’) ubuntu/ndiswrapper/iw_ndis.c:1971: error: unknown field ‘private_args’ specified in initializer ubuntu/ndiswrapper/iw_ndis.c:1971: warning: initialization from incompatible pointer type make[2]: *** [ubuntu/ndiswrapper/iw_ndis.o] Error 1 make[1]: *** [ubuntu/ndiswrapper] Error 2 make: *** [ubuntu] Error 2 How can I compile and install this kernel successfully? I'm new to this and would appreciate any help.

    Read the article

  • Making LISPs manageable

    - by Andrea
    I am trying to learn Clojure, which seems a good candidate for a successful LISP. I have no problem with the concepts, but now I would like to start actually doing something. Here it comes my problem. As I mainly do web stuff, I have been looking into existing frameworks, database libraries, templating libraries and so on. Often these libraries are heavily based on macros. Now, I like very much the possibility of writing macros to get a simpler syntax than it would be possible otherwise. But it definitely adds another layer of complexity. Let me take an example of a migration in Lobos from a blog post: (defmigration add-posts-table (up [] (create clogdb (table :posts (integer :id :primary-key ) (varchar :title 250) (text :content ) (boolean :status (default false)) (timestamp :created (default (now))) (timestamp :published ) (integer :author [:refer :authors :id] :not-null)))) (down [] (drop (table :posts )))) It is very readable indeed. But it is hard to recognize what the structure is. What does the function timestamp return? Or is it a macro? Having all this freedom of writing my own syntax means that I have to learn other people's syntax for every library I want to use. How can I learn to use these components effectively? Am I supposed to learn each small DSL as a black box?

    Read the article

  • MiniMax not working properly(for checkers game)

    - by engineer
    I am creating a checkers game, but my minimax is not functioning properly; it is always switching between two positions for its move (index 20 and 17). Here is my code: public double MiniMax(int[] board, int depth, int turn, int red_best, int black_best) { int source; int dest; double MAX_SCORE=-INFINITY,newScore; int MAX_DEPTH=3; int[] newBoard=new int[32]; generateMoves(board,turn); System.arraycopy(board, 0, newBoard, 0, 32); if(depth==MAX_DEPTH) { return Evaluation(turn,board);} for(int z=0;z<possibleMoves.size();z+=2){ source=Integer.parseInt(possibleMoves.elementAt(z).toString()); System.out.println("SOURCE= "+source); dest=Integer.parseInt(possibleMoves.elementAt(z+1).toString());//(int[])possibleMoves.elementAt(z+1); System.out.println("DEST = "+dest); applyMove(newBoard,source,dest); newScore=MiniMax(newBoard,depth+1,opponent(turn),red_best, black_best); if(newScore>MAX_SCORE) {MAX_SCORE=newScore;maxSource=source; maxDest=dest;}//maxSource and maxDest will be used to perform the move. if (MAX_SCORE > black_best) { if (MAX_SCORE >= red_best) break; /* alpha_beta cutoff */ else black_best = (int) MAX_SCORE; //the_score } if (MAX_SCORE < red_best) { if (MAX_SCORE<= black_best) break; /* alpha_beta cutoff */ else red_best = (int) MAX_SCORE; //the_score } }//for ends return MAX_SCORE; } //end minimax I am unable to find the logical mistake. Any idea what's going wrong?
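
    The usual culprit in this shape of code is that every level maximizes from the same point of view and that one copied board is reused for all moves at a level; a standard minimax alternates between maximizing and minimizing (or, equivalently, negates the recursive score) and applies each move to a fresh copy or undoes it. A toy but runnable sketch of that negation idea (a take-1-to-3-stones game, not checkers, purely to show the structure):

    public class NegamaxDemo {
        // best achievable score for the player to move: +1 = win, -1 = loss
        static int negamax(int stonesLeft) {
            if (stonesLeft == 0) { return -1; }                   // previous player took the last stone
            int best = Integer.MIN_VALUE;
            for (int take = 1; take <= 3 && take <= stonesLeft; take++) {
                int score = -negamax(stonesLeft - take);          // the opponent's best is the negative of ours
                if (score > best) { best = score; }
            }
            return best;
        }

        static int bestMove(int stonesLeft) {
            int bestTake = 1, best = Integer.MIN_VALUE;
            for (int take = 1; take <= 3 && take <= stonesLeft; take++) {
                int score = -negamax(stonesLeft - take);
                if (score > best) { best = score; bestTake = take; }
            }
            return bestTake;
        }

        public static void main(String[] args) {
            System.out.println("From 21 stones, take " + bestMove(21));   // prints 1, leaving a multiple of 4
        }
    }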

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually, and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]

    Read the article

  • Importing tab delimited file into array in Visual Basic 2013 [migrated]

    - by JaceG
    I am needing to import a tab delimited text file that has 11 columns and an unknown number of rows (always minimum 3 rows). I would like to import this text file as an array and be able to call data from it as needed, throughout my project. And then, to make things more difficult, I need to replace items in the array, and even add more rows to it as the project goes on (all at runtime). Hopefully someone can suggest code corrections or useful methods. I'm hoping to use something like the array style sMyStrings(3,2), which I believe would be the easiest way to control my data. Any help is gladly appreciated, and worthy of a slab of beer. Here's the coding I have so far: Imports System.IO Imports Microsoft.VisualBasic.FileIO Public Class Main Dim strReadLine As String Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load Dim sReader As IO.StreamReader = Nothing Dim sRawString As String = Nothing Dim sMyStrings() As String = Nothing Dim intCount As Integer = -1 Dim intFullLoop As Integer = 0 If IO.File.Exists("C:\MyProject\Hardware.txt") Then ' Make sure the file exists sReader = New IO.StreamReader("C:\MyProject\Hardware.txt") Else MsgBox("File doesn't exist.", MsgBoxStyle.Critical, "Error") End End If Do While sReader.Peek >= 0 ' Make sure you can read beyond the current position sRawString = sReader.ReadLine() ' Read the current line sMyStrings = sRawString.Split(New Char() {Chr(9)}) ' Separate values and store in a string array For Each s As String In sMyStrings ' Loop through the string array intCount = intCount + 1 ' Increment If TextBox1.Text <> "" Then TextBox1.Text = TextBox1.Text & vbCrLf ' Add line feed TextBox1.Text = TextBox1.Text & s ' Add line to debug textbox If intFullLoop > 14 And intCount > -1 And CBool((intCount - 0) / 11 Mod 0) Then cmbSelectHinge.Items.Add(sMyStrings(intCount)) End If Next intCount = -1 intFullLoop = intFullLoop + 1 Loop End Sub

    Read the article
