Search Results

Search found 7086 results on 284 pages for 'explain'.


  • quick java question

    - by j-unit-122
    private static char[] quicksort(char[] array, int left, int right) { if (left < right) { int p = partition(array, left, right); quicksort(array, left, p - 1); quicksort(array, p + 1, right); } for (char i : array) System.out.print(i + " "); System.out.println(); return array; } private static int partition(char[] a, int left, int right) { char p = a[left]; int l = left + 1, r = right; while (l < r) { while (l < right && a[l] < p) l++; while (r > left && a[r] >= p) r--; if (l < r) { char temp = a[l]; a[l] = a[r]; a[r] = temp; } } a[left] = a[r]; a[r] = p; return r; } } Hi guys, just a quick question regarding the code above. I know that it produces the following output when the sequence BIGCOMPUTER is used: B I G C O M P U T E R B C E G I M P U T O R B C E G I M P U T O R B C E G I M P U T O R B C E G I M P U T O R B C E G I M O P T U R B C E G I M O P R T U B C E G I M O P R T U B C E G I M O P R T U B C E G I M O P R T U B C E G I M O P R T U B C E G I M O P R T U B C E G I M O P R T U My question is: can someone explain what is happening in the code and how? I know a bit about the quicksort algorithm, but it doesn't seem to be the same in the example above.
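
    For comparison, here is a minimal, commented in-place quicksort sketch that follows the same shape as the snippet above (take the first element as the pivot, partition the range from both ends, then recurse on the two halves). It is written in C# purely as an illustration of the general algorithm and is not the poster's exact code:

        class QuicksortSketch
        {
            static void Quicksort(char[] a, int left, int right)
            {
                if (left >= right) return;            // 0 or 1 element left: nothing to do
                int p = Partition(a, left, right);    // pivot lands at its final index p
                Quicksort(a, left, p - 1);            // sort the part smaller than the pivot
                Quicksort(a, p + 1, right);           // sort the part greater than or equal to it
            }

            static int Partition(char[] a, int left, int right)
            {
                char pivot = a[left];                 // first element is the pivot
                int l = left + 1, r = right;
                while (l <= r)
                {
                    while (l <= r && a[l] < pivot) l++;   // skip values already on the correct left side
                    while (l <= r && a[r] >= pivot) r--;  // skip values already on the correct right side
                    if (l < r) { char t = a[l]; a[l] = a[r]; a[r] = t; }  // both out of place: swap them
                }
                a[left] = a[r];                       // swap the pivot into its final slot (index r)
                a[r] = pivot;
                return r;
            }

            static void Main()
            {
                char[] data = "BIGCOMPUTER".ToCharArray();
                Quicksort(data, 0, data.Length - 1);
                System.Console.WriteLine(new string(data));   // prints BCEGIMOPRTU
            }
        }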


  • grdb not working variables

    - by stupid_idiot
    Hi, I know this is a basic question, but I just can't figure it out. I'm debugging this: xor eax,eax mov ah,[var1] mov al,[var2] call addition stop: jmp stop var1: db 5 var2: db 6 addition: add ah,al ret The values I find at addresses var1 and var2 are 0x0E and 0x07. I know it's not segmented, but that shouldn't cause behaviour like this, because the addition call works just fine. Could you please explain where my mistake is? Update: I can see the problem now, though I don't know how to fix it yet. For some reason the instruction pointer starts at 0x100 and all the segment registers at 0x1628. Instructions are addressed through [cs:ip] (one of the segment registers plus the instruction pointer). The offset to var1 is 0x10 (probably because it is the 0x10th byte from the beginning of the code). When I examined the memory I got: 1628:100 (8 bytes), 1628:108 (8 bytes), 1628:110 (another 8 bytes), 1628:118, and so on. [cs:var1] points somewhere other than into my code, probably to where a .data label would usually be addressed via ds; I don't know what is supposed to be at 1628:10. Solution: I found out what caused the problem and cost me the whole day. The behaviour described above is actually correct and the code is fully functional. What I didn't know is that the grdb debugger sets the beginning address to 0x100, so the solution is to insert the directive ORG 0x100 on the first line, and that's the whole fix. The code appeared to work because the instruction pointer starts at the right address for the first instruction and steps through one by one, but the assembler doesn't know at what effective address the program will be loaded, so it keeps everything relative to the first line of the code. That means all the variables (if you are not using a label for a data section) keep pointing as if the code started at 0x0, which of course doesn't work under DOS, and grdb apparently emulates some DOS behaviour. I hope this spares someone some time if they hit the same problem; at least now I know why a .data section is used.


  • Why do sockets not die when server dies? Why does a socket die when server is alive?

    - by Roman
    I try to play with sockets a bit. For that I wrote very simple "client" and "server" applications. Client: import java.net.*; public class client { public static void main(String[] args) throws Exception { InetAddress localhost = InetAddress.getLocalHost(); System.out.println("before"); Socket clientSideSocket = null; try { clientSideSocket = new Socket(localhost,12345,localhost,54321); } catch (ConnectException e) { System.out.println("Connection Refused"); } System.out.println("after"); if (clientSideSocket != null) { clientSideSocket.close(); } } } Server: import java.net.*; public class server { public static void main(String[] args) throws Exception { ServerSocket listener = new ServerSocket(12345); while (true) { Socket serverSideSocket = listener.accept(); System.out.println("A client-request is accepted."); } } } And I found a behavior that I cannot explain: I start a server, than I start a client. Connection is successfully established (client stops running and server is running). Then I close the server and start it again in a second. After that I start a client and it writes "Connection Refused". It seems to me that the server "remember" the old connection and does not want to open the second connection twice. But I do not understand how it is possible. Because I killed the previous server and started a new one! I do not start the server immediately after the previous one was killed (I wait like 20 seconds). In this case the server "forget" the socket from the previous server and accepts the request from the client. I start the server and then I start the client. Connection is established (server writes: "A client-request is accepted"). Then I wait a minute and start the client again. And server (which was running the whole time) accept the request again! Why? The server should not accept the request from the same client-IP and client-port but it does!


  • Learning... anything really

    - by WebDevHobo
    I'm particularly interested in Windows PowerShell, but here's a somewhat more general complaint: When asking for help on learning something new, be it a small subject on PHP or understanding a class in Java, what usually happens is that people direct me towards the documentation pages. What I'm looking for is somewhat of a course. A deep explanation of why something works the way it does. I know my basic programming, like Java and C#. I've never seen C or C++, though I have seen a bit of assembler. I know what the Stack and Heap are, how boxing and unboxing works, why you have to deep-copy an array instead of copying the pointer and some other things. Windows PowerShell on the other hand, I know nothing about. And I notice that when reading the small document or some code, I usually forget what it does or why it works. What I am looking for is preferably, a nice tutorial that explains the beginnings, the concepts, and goes to more difficult things at a steady pace. The only thing documentation can do is explain what a function does. That's no good to me since I don't know what I want to do yet. I could read about a thousand functions, and forget about most of them, because I don't need to implement them right after it. Randomly wandering through the documentation doesn't do me any good. So conclude, what is a good tutorial on Windows Powershell? One which explains in clear language what is happening, one which builds on previous things learned. I don't think googling this is a good idea. Doing a Google search on this would turn up numerous tutorials. And experience tells me that you have to look long and hard to find the gem you're looking for. That's why I'm asking here. Because this is the place where you can find more experienced people. Many of the PowerShell guys among you will know the good ones already, and by asking you, I avoid wasting time that could be spent learning. So to summarize: I will not google this!


  • MySQL LEFT OUTER JOIN virtual table

    - by user1707323
    I am working on a pretty complicated query let me try to explain it to you. Here is the tables that I have in my MySQL database: students Table --- `students` --- student_id first_name last_name current_status status_change_date ------------ ------------ ----------- ---------------- -------------------- 1 John Doe Active NULL 2 Jane Doe Retread 2012-02-01 students_have_courses Table --- `students_have_courses` --- students_student_id courses_course_id s_date e_date int_date --------------------- ------------------- ---------- ---------- ----------- 1 1 2012-01-01 2012-01-04 2012-01-05 1 2 2012-01-05 NULL NULL 2 1 2012-01-10 2012-01-11 NULL students_have_optional_courses Table --- `students_have_optional_courses` --- students_student_id optional_courses_opcourse_id s_date e_date --------------------- ------------------------------ ---------- ---------- 1 1 2012-01-02 2012-01-03 1 1 2012-01-06 NULL 1 5 2012-01-07 NULL Here is my query so far SELECT `students_and_courses`.student_id, `students_and_courses`.first_name, `students_and_courses`.last_name, `students_and_courses`.courses_course_id, `students_and_courses`.s_date, `students_and_courses`.e_date, `students_and_courses`.int_date, `students_have_optional_courses`.optional_courses_opcourse_id, `students_have_optional_courses`.s_date, `students_have_optional_courses`.e_date FROM ( SELECT `c_s_a_s`.student_id, `c_s_a_s`.first_name, `c_s_a_s`.last_name, `c_s_a_s`.courses_course_id, `c_s_a_s`.s_date, `c_s_a_s`.e_date, `c_s_a_s`.int_date FROM ( SELECT `students`.student_id, `students`.first_name, `students`.last_name, `students_have_courses`.courses_course_id, `students_have_courses`.s_date, `students_have_courses`.e_date, `students_have_courses`.int_date FROM `students` LEFT OUTER JOIN `students_have_courses` ON ( `students_have_courses`.`students_student_id` = `students`.`student_id` AND (( `students_have_courses`.`s_date` >= `students`.`status_change_date` AND `students`.current_status = 'Retread' ) OR `students`.current_status = 'Active') ) WHERE `students`.current_status = 'Active' OR `students`.current_status = 'Retread' ) `c_s_a_s` ORDER BY `c_s_a_s`.`courses_course_id` DESC ) `students_and_courses` LEFT OUTER JOIN `students_have_optional_courses` ON ( `students_have_optional_courses`.students_student_id = `students_and_courses`.student_id AND `students_have_optional_courses`.s_date >= `students_and_courses`.s_date AND `students_have_optional_courses`.e_date IS NULL ) GROUP BY `students_and_courses`.student_id; What I want to be returned is the student_id, first_name, and last_name for all Active or Retread students and then LEFT JOIN the highest course_id, s_date, e_date, and int_date for the those students where the s_date is since the status_change_date if status is 'Retread'. 
Then LEFT JOIN the highest optional_courses_opcourse_id, s_date, and e_date from the students_have_optional_courses TABLE where the students_have_optional_courses.s_date is greater or equal to the students_have_courses.s_date and the students_have_optional_courses.e_date IS NULL Here is what is being returned: student_id first_name last_name courses_course_id s_date e_date int_date optional_courses_opcourse_id s_date_1 e_date_1 ------------ ------------ ----------- ------------------- ---------- ---------- ------------ ------------------------------ ---------- ---------- 1 John Doe 2 2012-01-05 NULL NULL 1 2012-01-06 NULL 2 Jane Doe NULL NULL NULL NULL NULL NULL NULL Here is what I want being returned: student_id first_name last_name courses_course_id s_date e_date int_date optional_courses_opcourse_id s_date_1 e_date_1 ------------ ------------ ----------- ------------------- ---------- ---------- ------------ ------------------------------ ---------- ---------- 1 John Doe 2 2012-01-05 NULL NULL 5 2012-01-07 NULL 2 Jane Doe NULL NULL NULL NULL NULL NULL NULL Everything is working except one thing, I cannot seem to get the highest students_have_optional_courses.optional_courses_opcourse_id no matter how I form the query Sorry, I just solved this myself after writing this all out I think it helped me think of the solution. Here is the solution query: SELECT `students_and_courses`.student_id, `students_and_courses`.first_name, `students_and_courses`.last_name, `students_and_courses`.courses_course_id, `students_and_courses`.s_date, `students_and_courses`.e_date, `students_and_courses`.int_date, `students_optional_courses`.optional_courses_opcourse_id, `students_optional_courses`.s_date, `students_optional_courses`.e_date FROM ( SELECT `c_s_a_s`.student_id, `c_s_a_s`.first_name, `c_s_a_s`.last_name, `c_s_a_s`.courses_course_id, `c_s_a_s`.s_date, `c_s_a_s`.e_date, `c_s_a_s`.int_date FROM ( SELECT `students`.student_id, `students`.first_name, `students`.last_name, `students_have_courses`.courses_course_id, `students_have_courses`.s_date, `students_have_courses`.e_date, `students_have_courses`.int_date FROM `students` LEFT OUTER JOIN `students_have_courses` ON ( `students_have_courses`.`students_student_id` = `students`.`student_id` AND (( `students_have_courses`.`s_date` >= `students`.`status_change_date` AND `students`.current_status = 'Retread' ) OR `students`.current_status = 'Active') ) WHERE `students`.current_status = 'Active' OR `students`.current_status = 'Retread' ) `c_s_a_s` ORDER BY `c_s_a_s`.`courses_course_id` DESC ) `students_and_courses` LEFT OUTER JOIN ( SELECT * FROM `students_have_optional_courses` ORDER BY `students_have_optional_courses`.optional_courses_opcourse_id DESC ) `students_optional_courses` ON ( `students_optional_courses`.students_student_id = `students_and_courses`.student_id AND `students_optional_courses`.s_date >= `students_and_courses`.s_date AND `students_optional_courses`.e_date IS NULL ) GROUP BY `students_and_courses`.student_id;


  • What database table structure should I use for versions, codebases, deployables?

    - by Zac Thompson
    I'm having doubts about my table structure, and I wonder if there is a better approach. I've got a little database for version control repositories (e.g. SVN), the packages (e.g. Linux RPMs) built therefrom, and the versions (e.g. 1.2.3-4) thereof. A given repository might produce no packages, or several, but if there are more than one for a given repository then a particular version for that repository will indicate a single "tag" of the codebase. A particular version "string" might be used to tag a version of the source code in more than one repository, but there may be no relationship between "1.0" for two different repos. So if packages P and Q both come from repo R, then P 1.0 and Q 1.0 are both built from the 1.0 tag of repo R. But if package X comes from repo Y, then X 1.0 has no relationship to P 1.0. In my (simplified) model, I have the following tables (the x_id columns are auto-incrementing surrogate keys; you can pretend I'm using a different primary key if you wish, it's not really important): repository - repository_id - repository_name (unique) ... version - version_id - version_string (unique for a particular repository) - repository_id ... package - package_id - package_name (unique) - repository_id ... This makes it easy for me to see, for example, what are valid versions of a given package: I can join with the version table using the repository_id. However, suppose I would like to add some information to this database, e.g., to indicate which package versions have been approved for release. I certainly need a new table: package_version - version_id - package_id - package_version_released ... Again, the nature of the keys that I use are not really important to my problem, and you can imagine that the data column is "promotion_level" or something if that helps. My doubts arise when I realize that there's really a very close relationship between the version_id and the package_id in my new table ... they must share the same repository_id. Only a small subset of package/version combinations are valid. So I should have some kind of constraint on those columns, enforcing that ... ... I don't know, it just feels off, somehow. Like I'm including somehow more information than I really need? I don't know how to explain my hesitance here. I can't figure out which (if any) normal form I'm violating, but I also can't find an example of a schema with this sort of structure ... not being a DBA by profession I'm not sure where to look. So I'm asking: am I just being overly sensitive?


  • devise register confirmation

    - by mattherick
    Hello! I have a user role and an admin role in my project. I created my authentication with Devise, a really nice and good tool for handling authentication. My admin role doesn't have any confirmation or anything like that; it is really simple and doesn't cause problems. But in my user model I have the following: model: devise :database_authenticatable, :confirmable, :recoverable, :rememberable, :trackable, :validatable, :timeoutable, :registerable # Setup accessible (or protected) attributes for your model attr_accessible :email, :username, :prename, :surname, :phone, :street, :number, :location, :password, :password_confirmation and a few validations, but they aren't relevant here. My migration looks like the following: class DeviseCreateUsers < ActiveRecord::Migration def self.up create_table(:users) do |t| t.database_authenticatable :null => false t.confirmable t.recoverable t.rememberable t.trackable t.timeoutable t.validateable t.string :username t.string :prename t.string :surname t.string :phone t.string :street t.integer :number t.string :location t.timestamps end add_index :users, :email, :unique => true add_index :users, :confirmation_token, :unique => true add_index :users, :reset_password_token, :unique => true add_index :users, :username, :unique => true add_index :users, :prename, :unique => false add_index :users, :surname, :unique => false add_index :users, :phone, :unique => false add_index :users, :street, :unique => false add_index :users, :number, :unique => false add_index :users, :location, :unique => false end def self.down drop_table :users end end Into my routes.rb I added the following statements: map.devise_for :admins map.devise_for :users, :path_names => { :sign_up => "register", :sign_in => "login" } map.root :controller => "main" And now my problem: when I register a new user, I fill in all my data in the registration form and submit it. After that I get redirected to the main controller with the flash notice "You have signed up successfully." and I am logged in. But I don't want to be logged in, because I haven't confirmed my new user account yet. If I open the console I can see the confirmation mail in the logs, but I am already logged in... I can't explain why. Does anybody have an idea? If I copy the confirmation token out of the logs and confirm my account, I can log in, but if I don't confirm, I can also log in.


  • log4bash: Cannot find a way to add MaxBackupIndex to this logger implementation

    - by Syffys
    I have been trying to modify this log4bash implementation but I cannot manage to make it work. Here's a sample: #!/bin/bash TRUE=1 FALSE=0 ############### Added for testing log4bash_LOG_ENABLED=$TRUE log4bash_rootLogger=$TRACE,f,s log4bash_appender_f=file log4bash_appender_f_dir=$(pwd) log4bash_appender_f_file=test.log log4bash_appender_f_roll_format=%Y%m log4bash_appender_f_roll=$TRUE log4bash_appender_f_maxBackupIndex=10 #################################### log4bash_abs(){ if [ "${1:0:1}" == "." ]; then builtin echo ${rootDir}/${1} else builtin echo ${1} fi } log4bash_check_app_dir(){ if [ "$log4bash_LOG_ENABLED" -eq $TRUE ]; then dir=$(log4bash_abs $1) if [ ! -d ${dir} ]; then #log a seperation line mkdir $dir fi fi } # Delete old log files # $1 Log directory # $2 Log filename # $3 Log filename suffix # $4 Max backup index log4bash_delete_old_files(){ ##### Added for testing builtin echo "Running log4bash_delete_old_files $@" &2 ##### if [ "$log4bash_LOG_ENABLED" -eq $TRUE ] && [ -n "$3" ] && [ "$4" -gt 0 ]; then local directory=$(log4bash_abs $1) local filename=$2 local maxBackupIndex=$4 local suffix=$(echo "${3}" | sed -re 's/[^.]/?/g') local logFileList=$(find "${directory}" -mindepth 1 -maxdepth 1 -name "${filename}${suffix}" -type f | xargs ls -1rt) local fileCnt=$(builtin echo -e "${logFileList}" | wc -l) local fileToDeleteCnt=$(($fileCnt-$maxBackupIndex)) local fileToDelete=($(builtin echo -e "${logFileList}" | head -n "${fileToDeleteCnt}" | sed ':a;N;$!ba;s/\n/ /g')) ##### Added for testing builtin echo "log4bash_delete_old_files About to start deletion ${fileToDelete[@]}" &2 ##### if [ ${fileToDeleteCnt} -gt 0 ]; then for f in "${fileToDelete[@]}"; do #### Added for testing builtin echo "Removing file ${f}" &2 #### builtin eval rm -f ${f} done fi fi } #Appender # $1 Log directory # $2 Log file # $3 Log file roll ? 
# $4 Appender Name log4bash_filename(){ builtin echo "Running log4bash_filename $@" &2 local format local filename log4bash_check_app_dir "${1}" if [ ${3} -eq 1 ];then local formatProp=${4}_roll_format format=${!formatProp} if [ -z ${format} ]; then format=$log4bash_appender_file_format fi local suffix=.`date "+${format}"` filename=${1}/${2}${suffix} # Old log files deletion local previousFilenameVar=int_${4}_file_previous local maxBackupIndexVar=${4}_maxBackupIndex if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then builtin eval export $previousFilenameVar=$filename log4bash_delete_old_files "${1}" "${2}" "${suffix}" "${!maxBackupIndexVar}" else builtin echo "log4bash_filename $previousFilenameVar = ${!previousFilenameVar}" fi else filename=${1}/${2} fi builtin echo $filename } ######################## Added for testing filename_caller(){ builtin echo "filename_caller Call $1" output=$(log4bash_abs $(log4bash_filename "${log4bash_appender_f_dir}" "${log4bash_appender_f_file}" "1" "log4bash_appender_f" )) builtin echo ${output} } #### Previous logs generation for i in {1101..1120}; do file="${log4bash_appender_f_file}.2012${i:2:3}" builtin echo "${file} $i" touch -m -t "2012${i}0000" ${log4bash_appender_f_dir}/$file done for i in {1..4}; do filename_caller $i done I expect log4bash_filename function to step into the following if only when the calculated log filename is different from the previous one: if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then For this scenario to apply, I'd need ${!previousFilenameVar} to be correctly set, but it's not the case, so log4bash_filename steps into this if all the time which is really not necessary... It looks like the issue is due to the following line not working properly: builtin eval export $previousFilenameVar=$filename I have a some theories to explain why: in the original code, functions are declared and exported as readonly which makes them unable to modify global variable. I removed readonly declarations in the above sample, but probleme persists. Function calls are performed in $() which should make them run into seperated shell instances so variable modified are not exported to the main shell But I cannot manage to find a workaround to this issue... Any help is appreciated, thanks in advance!


  • how to fix protocol violation in c#

    - by Jeremy Styers
    I have a c# "client" and a Java "server". The java server has a wsdl it serves to the client. So far it works for c# to make a request for the server to perform a soap action. My server gets the soap request executes the method and tries to return the result back to the client. When I send the response to c# however, I get "The server committed a protocol violation. Section=ResponseStatusLine". I have spent all day trying to fix this and have come up with nothing that works. If I explain what i did, this post would be very long, so I'll keep it brief. i Googled for hours and everything tells me my "response line" is correct. I tried shutting down Skype, rearranging the response line, adding things, taking things away, etc, etc. All to no avail. This is for a class assignment so no, I can not use apis to help. I must do everything manually on the server side. That means parsing by hand, creating the soap response and the http response by hand. Just thought you'd like to know that before you say to use something that does it for me. I even tried making sure my server was sending the correct header by creating a java client that "mimicked" the c# one so I could see what the server returned. However, it's returning exactly what i told it to send. I tried telling my java client to do the same thing but to an actuall running c# service, to see what a real service returns, and it returned basically the same thing. To be safe, I copied it's response and tried sending it to the c# client and it still threw the error. Can anyone help? I've tried all i can think of, including adding the useUnsafeHeaderParsing to my app config. Nothing is working though. I send it exactly what a real service sends it and it yells at me. I send it what i want and it yells. I'm sending this: "200 OK HTTP/1.0\r\n" + "Content-Length: 201\r\n" + "Cache-Control: private\r\n" + "Content-Type: text/xml; charset=utf-8\r\n\r\n";


  • Why does WebSharingAppDemo-CEProviderEndToEnd sample still need a client db connection after scope c

    - by Don
    I'm researching a way to build an n-tierd sync solution. From the WebSharingAppDemo-CEProviderEndToEnd sample it seems almost feasable however for some reason, the app will only sync if the client has a live SQL db connection. Can some one explain what I'm missing and how to sync without exposing SQL to the internet? The problem I'm experiencing is that when I provide a Relational sync provider that has an open SQL connection from the client, then it works fine but when I provide a Relational sync provider that has a closed but configured connection string, as in the example, I get an error from the WCF stating that the server did not receive the batch file. So what am I doing wrong? SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(); builder.DataSource = hostName; builder.IntegratedSecurity = true; builder.InitialCatalog = "mydbname"; builder.ConnectTimeout = 1; provider.Connection = new SqlConnection(builder.ToString()); // provider.Connection.Open(); **** un-commenting this causes the code to work** //create anew scope description and add the appropriate tables to this scope DbSyncScopeDescription scopeDesc = new DbSyncScopeDescription(SyncUtils.ScopeName); //class to be used to provision the scope defined above SqlSyncScopeProvisioning serverConfig = new SqlSyncScopeProvisioning(); .... The error I get occurs in this part of the WCF code: public SyncSessionStatistics ApplyChanges(ConflictResolutionPolicy resolutionPolicy, ChangeBatch sourceChanges, object changeData) { Log("ProcessChangeBatch: {0}", this.peerProvider.Connection.ConnectionString); DbSyncContext dataRetriever = changeData as DbSyncContext; if (dataRetriever != null && dataRetriever.IsDataBatched) { string remotePeerId = dataRetriever.MadeWithKnowledge.ReplicaId.ToString(); //Data is batched. The client should have uploaded this file to us prior to calling ApplyChanges. //So look for it. //The Id would be the DbSyncContext.BatchFileName which is just the batch file name without the complete path string localBatchFileName = null; if (!this.batchIdToFileMapper.TryGetValue(dataRetriever.BatchFileName, out localBatchFileName)) { //Service has not received this file. Throw exception throw new FaultException<WebSyncFaultException>(new WebSyncFaultException("No batch file uploaded for id " + dataRetriever.BatchFileName, null)); } dataRetriever.BatchFileName = localBatchFileName; } Any ideas?


  • What is the best practice to segment c#.net projects based on a single base project

    - by Anthony
    Honestly, I can't word my question any better without describing it. I have a base project (with all its glory, dlls, resources etc) which is a CMS. I need to use this project as a base for othe custom bake projects. This base project is to be maintained and updated among all custom bake projects. I use subversion (Collabnet and Tortise SVN) I have two questions: 1 - Can I use subversion to share the base project among other projects What I mean here is can I "Checkout" the base project into another "Checked Out" project and have both update and commit seperatley. So, to paint a picture, let's say I am working on a custom project and I modify the core/base prject in some way (which I know will suit the others) can I then commit those changes and upon doing so when I update the base project in the other "Checked out" resources will it pull the changes? In short, I would like not to have to manually deploy updated core files whenever I make changes into each seperate project. 2 - If I create a custom file (let's say an webcontrol or aspx page etc) can I have it compile seperatley from the base project Another tricky one to explain. When I publish my web application it creates DLLs based on the namespaces of projects attached to it. So I may have a number of DLLs including the "Website's" namespace DLL, which could simply be website. I want to be able to make a seperate, custom, control which does not compile into those DLLs as the custom files should not rely on those DLLS to run. Is it as simple to set a seperate namespace for those files like CustomFiles.ProjectName for example? Think of the whole idea as adding modules to the .NET project, I don't want the module's code in any of the core DLLs but I do need for module to be able to access the core dlls. (There is no need for the core project to access the module code as it should be one way only in theory, though I reckon it woould not be possible anyway without using JSON/SOAP or something like that, maybe I am wrong.) I want to create a pluggable environment much like that of Joomla/Wordpress as since PHP generally doesn't have to be compiled first I see this is the reason why all this is possible/easy. The idea is to allow pluggable themes, modules etc etc. (I haven't tried simply adding .NET themes after compile/publish but I am assuming this is possible anyway? OR does the compiler need to reference items in the files?)


  • vector related memory allocation question

    - by memC
    hi all, I am encountering the following bug. I have a class Foo . Instances of this class are stored in a std::vector vec of class B. in class Foo, I am creating an instance of class A by allocating memory using new and deleting that object in ~Foo(). the code compiles, but I get a crash at the runtime. If I disable delete my_a from desstructor of class Foo. The code runs fine (but there is going to be a memory leak). Could someone please explain what is going wrong here and suggest a fix? thank you! class A{ public: A(int val); ~A(){}; int val_a; }; A::A(int val){ val_a = val; }; class Foo { public: Foo(); ~Foo(); void createA(); A* my_a; }; Foo::Foo(){ createA(); }; void Foo::createA(){ my_a = new A(20); }; Foo::~Foo(){ delete my_a; }; class B { public: vector<Foo> vec; void createFoo(); B(){}; ~B(){}; }; void B::createFoo(){ vec.push_back(Foo()); }; int main(){ B b; int i =0; for (i = 0; i < 5; i ++){ std::cout<<"\n creating Foo"; b.createFoo(); std::cout<<"\n Foo created"; } std::cout<<"\nDone with Foo creation"; std::cout << "\nPress RETURN to continue..."; std::cin.get(); return 0; }


  • How do I 'globally' catch exceptions thrown in object instances.

    - by SleepyBobos
    I am currently writing a winforms application (C#). I am making use of the Enterprise Library Exception Handling Block, following a fairly standard approach from what I can see. IE : In the Main method of Program.cs I have wired up event handler to Application.ThreadException event etc. This approach works well and handles the applications exceptional circumstances. In one of my business objects I throw various exceptions in the Set accessor of one of the objects properties set { if (value > MaximumTrim) throw new CustomExceptions.InvalidTrimValue("The value of the minimum trim..."); if (!availableSubMasterWidthSatisfiesAllPatterns(value)) throw new CustomExceptions.InvalidTrimValue("Another message..."); _minimumTrim = value; } My logic for this approach (without turning this into a 'when to throw exceptions' discussion) is simply that the business objects are responsible for checking business rule constraints and throwing an exception that can bubble up and be caught as required. It should be noted that in the UI of my application I do explictly check the values that the public property is being set to (and take action there displaying friendly dialog etc) but with throwing the exception I am also covering the situation where my business object may not be used by a UI eg : the Property is being set by another business object for example. Anyway I think you all get the idea. My issue is that these exceptions are not being caught by the handler wired up to Application.ThreadException and I don't understand why. From other reading I have done the Application.ThreadException event and it handler "... catches any exception that occurs on the main GUI thread". Are the exceptions being raised in my business object not in this thread? I have not created any new threads. I can get the approach to work if I update the code as follows, explicity calling the event handler that is wired to Application.ThreadException. This is the approach outlined in Enterprise Library samples. However this approach requires me to wrap any exceptions thrown in a try catch, something I was trying to avoid by using a 'global' handler to start with. try { if (value > MaximumTrim) throw new CustomExceptions.InvalidTrimValue("The value of the minimum..."); if (!availableSubMasterWidthSatisfiesAllPatterns(value)) throw new CustomExceptions.InvalidTrimValue("Another message"); _minimumTrim = value; } catch (Exception ex) { Program.ThreadExceptionHandler.ProcessUnhandledException(ex); } I have also investigated using wiring a handler up to AppDomain.UnhandledException event but this does not catch the exceptions either. I would be good if someone could explain to me why my exceptions are not being caught by my global exception handler in the first code sample. Is there another approach I am missing or am I stuck with wrapping code in try catch, shown above, as required?
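
    For reference, a minimal sketch of the usual WinForms wiring (the handler names are placeholders). Application.ThreadException only sees exceptions that propagate, unhandled, all the way to the message loop of the UI thread, so anything thrown on another thread, raised before or outside Application.Run, or handled by an intermediate layer (data binding and grids, for example, can catch setter exceptions themselves) will never reach it; AppDomain.CurrentDomain.UnhandledException is the usual companion for the remaining cases:

        using System;
        using System.Threading;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                // Route unhandled UI-thread exceptions to the ThreadException event.
                Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
                Application.ThreadException += OnUiThreadException;

                // Unhandled exceptions on other threads surface here instead.
                AppDomain.CurrentDomain.UnhandledException += OnUnhandledException;

                Application.Run(new Form());   // replace with the real startup form
            }

            static void OnUiThreadException(object sender, ThreadExceptionEventArgs e)
            {
                // e.Exception is the exception that reached the message loop unhandled
                MessageBox.Show(e.Exception.Message, "Unhandled UI exception");
            }

            static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
            {
                var ex = e.ExceptionObject as Exception;
                MessageBox.Show(ex != null ? ex.Message : "Unknown error", "Unhandled exception");
            }
        }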


  • Backbone.js (model instanceof Model) via Chrome Extension

    - by Leoncelot
    Hey guys, This is my first time ever posting on this site and the problem I'm about to pose is difficult to articulate due to the set of variables required to arrive at it. Let me just quickly explain the framework I'm working with. I'm building a Chrome Extension using jQuery, jQuery-ui, and Backbone The entire JS suite for the extension is written in CoffeeScript and I'm utilizing Rails and the asset pipeline to manage it all. This means that when I want to deploy my extension code I run rake assets:precompile and copy the resulting compressed JS to my extensions Directory. The nice thing about this approach is that I can actually run the extension js from inside my Rails app by including the library. This is basically the same as my extensions background.js file which injects the js as a content script. Anyway, the problem I've recently encountered was when I tried testing my extension on my buddy's site, whiskeynotes.com. What I was noticing is that my backbone models were being mangled upon adding them to their respective collections. So something like this.collection.add(new SomeModel) created some nonsense version of my model. This code eventually runs into Backbone's prepareModel code _prepareModel: function(model, options) { options || (options = {}); if (!(model instanceof Model)) { var attrs = model; options.collection = this; model = new this.model(attrs, options); if (!model._validate(model.attributes, options)) model = false; } else if (!model.collection) { model.collection = this; } return model; }, Now, in most of the sites on which I've tested the extension, the result is normal, however on my buddy's site the !(model instance Model) evaluates to true even though it is actually an instance of the correct class. The consequence is a super messed up version of the model where the model's attributes is a reference to the models collection (strange right?). Needless to say, all kinds of crazy things were happening afterward. Why this is occurring is beyond me. However changing this line (!(model instanceof Model)) to (!(model instanceof Backbone.Model)) seems to fix the problem. I thought maybe it had something to do with the Flot library (jQuery graph library) creating their own version of 'Model' but looking through the source yielded no instances of it. I'm just curious as to why this would happen. And does it make sense to add this little change to the Backbone source? Update: I just realized that the "fix" doesn't actually work. I can also add that my backbone Models are namespaced in a wrapping object so that declaration looks something like class SomeNamespace.SomeModel extends Backbone.Model


  • Is a many-to-many relationship with extra fields the right tool for my job?

    - by whichhand
    Previously had a go at asking a more specific version of this question, but had trouble articulating what my question was. On reflection that made me doubt if my chosen solution was correct for the problem, so this time I will explain the problem and ask if a) I am on the right track and b) if there is a way around my current brick wall. I am currently building a web interface to enable an existing database to be interrogated by (a small number of) users. Sticking with the analogy from the docs, I have models that look something like this: class Musician(models.Model): first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) dob = models.DateField() class Album(models.Model): artist = models.ForeignKey(Musician) name = models.CharField(max_length=100) class Instrument(models.Model): artist = models.ForeignKey(Musician) name = models.CharField(max_length=100) Where I have one central table (Musician) and several tables of associated data that are related by either ForeignKey or OneToOneFields. Users interact with the database by creating filtering criteria to select a subset of Musicians based on data the data on the main or related tables. Likewise, the users can then select what piece of data is used to rank results that are presented to them. The results are then viewed initially as a 2 dimensional table with a single row per Musician with selected data fields (or aggregates) in each column. To give you some idea of scale, the database has ~5,000 Musicians with around 20 fields of related data. Up to here is fine and I have a working implementation. However, it is important that I have the ability for a given user to upload there own annotation data sets (more than one) and then filter and order on these in the same way they can with the existing data. The way I had tried to do this was to add the models: class UserDataSets(models.Model): user = models.ForeignKey(User) name = models.CharField(max_length=100) description = models.CharField(max_length=64) results = models.ManyToManyField(Musician, through='UserData') class UserData(models.Model): artist = models.ForeignKey(Musician) dataset = models.ForeignKey(UserDataSets) score = models.IntegerField() class Meta: unique_together = (("artist", "dataset"),) I have a simple upload mechanism enabling users to upload a data set file that consists of 1 to 1 relationship between a Musician and their "score". Within a given user dataset each artist will be unique, but different datasets are independent from each other and will often contain entries for the same musician. This worked fine for displaying the data, starting from a given artist I can do something like this: artist = Musician.objects.get(pk=1) dataset = UserDataSets.objects.get(pk=5) print artist.userdata_set.get(dataset=dataset.pk) However, this approach fell over when I came to implement the filtering and ordering of query set of musicians based on the data contained in a single user data set. For example, I could easily order the query set based on all of the data in the UserData table like this: artists = Musician.objects.all().order_by(userdata__score) But that does not help me order by the results of a given single user dataset. Likewise I need to be able to filter the query set based on the "scores" from different user data sets (eg find all musicians with a score 5 in dataset1 and < 2 in dataset2). Is there a way of doing this, or am I going about the whole thing wrong?


  • thread management in nbody code of cuda-sdk

    - by xnov
    When I read the nbody code in Cuda-SDK, I went through some lines in the code and I found that it is a little bit different than their paper in GPUGems3 "Fast N-Body Simulation with CUDA". My questions are: First, why the blockIdx.x is still involved in loading memory from global to share memory as written in the following code? for (int tile = blockIdx.y; tile < numTiles + blockIdx.y; tile++) { sharedPos[threadIdx.x+blockDim.x*threadIdx.y] = multithreadBodies ? positions[WRAP(blockIdx.x + q * tile + threadIdx.y, gridDim.x) * p + threadIdx.x] : //this line positions[WRAP(blockIdx.x + tile, gridDim.x) * p + threadIdx.x]; //this line __syncthreads(); // This is the "tile_calculation" function from the GPUG3 article. acc = gravitation(bodyPos, acc); __syncthreads(); } isn't it supposed to be like this according to paper? I wonder why sharedPos[threadIdx.x+blockDim.x*threadIdx.y] = multithreadBodies ? positions[WRAP(q * tile + threadIdx.y, gridDim.x) * p + threadIdx.x] : positions[WRAP(tile, gridDim.x) * p + threadIdx.x]; Second, in the multiple threads per body why the threadIdx.x is still involved? Isn't it supposed to be a fix value or not involving at all because the sum only due to threadIdx.y if (multithreadBodies) { SX_SUM(threadIdx.x, threadIdx.y).x = acc.x; //this line SX_SUM(threadIdx.x, threadIdx.y).y = acc.y; //this line SX_SUM(threadIdx.x, threadIdx.y).z = acc.z; //this line __syncthreads(); // Save the result in global memory for the integration step if (threadIdx.y == 0) { for (int i = 1; i < blockDim.y; i++) { acc.x += SX_SUM(threadIdx.x,i).x; //this line acc.y += SX_SUM(threadIdx.x,i).y; //this line acc.z += SX_SUM(threadIdx.x,i).z; //this line } } } Can anyone explain this to me? Is it some kind of optimization for faster code?


  • invalid postback event instead of dropdown to datagrid

    - by rima
    I faced with funny situation. I created a page which is having some value, I set these value and control my post back event also. The problem is happening when I change a component index(ex reselect a combobox which is not inside my datagrid) then I dont know why without my page call the Page_Load it goes to create a new row in grid function and all of my parameter are null! I am just receiving null exception. So in other word I try to explain the situation: when I load my page I am initializing some parameter. then everything is working fine. in my page when I change selected item of my combo box, page suppose to go and run function related to that combo box, and call page_load, but it is not going there and it goes to rowcreated function. I am trying to illustrate part of my page. Please help me because I am not receiving any error except null exception and it triger wrong even which seems so complicated for me. public partial class W_CM_FRM_02 : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (Page.IsPostBack && !loginFail) return; InitializeItems(); } } private void InitializeItems() { cols = new string[] { "v_classification_code", "v_classification_name" }; arrlstCMMM_CLASSIFICATION = (ArrayList)db.Select(cols, "CMMM_CLASSIFICATION", "v_classification_code <> 'N'", " ORDER BY v_classification_name"); } } protected void DGV_RFA_DETAILS_RowCreated(object sender, GridViewRowEventArgs e) { //db = (Database)Session["oCon"]; foreach (DataRow dr in arrlstCMMM_CLASSIFICATION) ((DropDownList)DGV_RFA_DETAILS.Rows[index].Cells[4].FindControl("OV_RFA_CLASSIFICATION")).Items.Add(new ListItem(dr["v_classification_name"].ToString(), dr["v_classification_code"].ToString())); } protected void V_CUSTOMER_SelectedIndexChanged(object sender, EventArgs e) { if (V_CUSTOMER.SelectedValue == "xxx" || V_CUSTOMER.SelectedValue == "ddd") V_IMPACTED_FUNCTIONS.Enabled = true; } } my form: <%@ Page Language="C#" MasterPageFile="~/MasterPage.master" AutoEventWireup="true" CodeFile="W_CM_FRM_02.aspx.cs" Inherits="W_CM_FRM_02" Title="W_CM_FRM_02" enableeventvalidation="false" EnableViewState="true"%> <td>Project name*</td> <td><asp:DropDownList ID="V_CUSTOMER" runat="server" AutoPostBack="True" onselectedindexchanged="V_CUSTOMER_SelectedIndexChanged" /></td> <td colspan = "8"> <asp:GridView ID="DGV_RFA_DETAILS" runat="server" ShowFooter="True" AutoGenerateColumns="False" CellPadding="1" ForeColor="#333333" GridLines="None" OnRowDeleting="grvRFADetails_RowDeleting" Width="100%" Style="text-align: left" onrowcreated="DGV_RFA_DETAILS_RowCreated"> <RowStyle BackColor="#FFFBD6" ForeColor="#333333" /> <Columns> <asp:BoundField DataField="ON_RowNumber" HeaderText="SNo" /> <asp:TemplateField HeaderText="RFA/RAD/Ticket No*"> <ItemTemplate> <asp:TextBox ID="OV_RFA_NO" runat="server" Width="120"></asp:TextBox> </ItemTemplate> </asp:TemplateField>


  • Correlation formula explanation needed d3.js

    - by divakar
    function getCorrelation(xArray, yArray) { alert(xArray); alert(yArray); function sum(m, v) {return m + v;} function sumSquares(m, v) {return m + v * v;} function filterNaN(m, v, i) {isNaN(v) ? null : m.push(i); return m;} // clean the data (because we know that some values are missing) var xNaN = _.reduce(xArray, filterNaN , []); var yNaN = _.reduce(yArray, filterNaN , []); var include = _.intersection(xNaN, yNaN); var fX = _.map(include, function(d) {return xArray[d];}); var fY = _.map(include, function(d) {return yArray[d];}); var sumX = _.reduce(fX, sum, 0); var sumY = _.reduce(fY, sum, 0); var sumX2 = _.reduce(fX, sumSquares, 0); var sumY2 = _.reduce(fY, sumSquares, 0); var sumXY = _.reduce(fX, function(m, v, i) {return m + v * fY[i];}, 0); var n = fX.length; var ntor = ( ( sumXY ) - ( sumX * sumY / n) ); var dtorX = sumX2 - ( sumX * sumX / n); var dtorY = sumY2 - ( sumY * sumY / n); var r = ntor / (Math.sqrt( dtorX * dtorY )); // Pearson ( http://www.stat.wmich.edu/s216/book/node122.html ) var m = ntor / dtorX; // y = mx + b var b = ( sumY - m * sumX ) / n; // console.log(r, m, b); return {r: r, m: m, b: b}; } I have finding correlation between the points i plot using this function which is not written by me. my xarray=[120,110,130,132,120,118,134,105,120,0,0,0,0,137,125,120,127,120,160,120,148] yarray=[80,70,70,80,70,62,69,70,70,62,90,42,80,72,0,0,0,0,78,82,68,60,58,82,60,76,86,82,70] I can t able to understand the function perfectly. Can anybody explain it with the data i pasted here. I also wanted to remove the zeros getting calculated from this function.
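
    For reference, the quantities this function computes are the standard Pearson correlation coefficient and the least-squares line for the n pairs (x_i, y_i) that survive the NaN filtering; ntor, dtorX and dtorY in the code are the numerator and the two bracketed variance terms below (written out in LaTeX purely for clarity):

        r = \frac{\sum x_i y_i - \frac{(\sum x_i)(\sum y_i)}{n}}
                 {\sqrt{\left(\sum x_i^2 - \frac{(\sum x_i)^2}{n}\right)
                        \left(\sum y_i^2 - \frac{(\sum y_i)^2}{n}\right)}}
        \qquad
        m = \frac{\sum x_i y_i - \frac{(\sum x_i)(\sum y_i)}{n}}{\sum x_i^2 - \frac{(\sum x_i)^2}{n}}
        \qquad
        b = \frac{\sum y_i - m \sum x_i}{n}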


  • Memory leaks getting sub-images from video (cvGetSubRect)

    - by dnul
    Hi, i'm trying to do video windowing that is: show all frames from a video and also some sub-image from each frame. This sub-image can change size and be taken from a different position of the original frame. So , the code i've written does basically this: cvQueryFrame to get a new image from the video Create a new IplImage (img) with sub-image dimensions ( window.height,window.width) Create a new Cvmat (mat) with sub-image dimensions ( window.height,window.width) CvGetSubRect(originalImage,mat,window) seizes the sub-image transform Mat (cvMat) to img (IplImage) using cvGetImage my problem is that for each frame i create new IplImage and cvMat which take a lot of memory and when i try to free the allocated memory I get a segmentation fault or in the case of the CvMat the allocated space does not get free (valgrind keeps telling me its definetly lost space). the following code does it: int main(void){ CvCapture* capture; CvRect window; CvMat * tmp; //window size window.x=0;window.y=0;window.height=100;window.width=100; IplImage * src=NULL,*bk=NULL,* sub=NULL; capture=cvCreateFileCapture( "somevideo.wmv"); while((src=cvQueryFrame(capture))!=NULL){ cvShowImage("common",src); //get sub-image sub=cvCreateImage(cvSize(window.height,window.width),8,3); tmp =cvCreateMat(window.height, window.width,CV_8UC1); cvGetSubRect(src, tmp , window); sub=cvGetImage(tmp, sub); cvShowImage("Window",sub); //free space if(bk!=NULL) cvReleaseImage(&bk); bk=sub; cvReleaseMat(&tmp); cvWaitKey(20); //window dimensions changes window.width++; window.height++; } } cvReleaseMat(&tmp); does not seem to have any effect on the total amount of lost memory, valgrind reports the same amount of "definetly lost" memory if i comment or uncomment this line. cvReleaseImage(&bk); produces a segmentation fault. notice i'm trying to free the previous sub-frame which i'm backing up in the bk variable. If i comment this line the program runs smoothly but with lots of memory leaks I really need to get rid of memory leaks, can anyone explain me how to correct this or even better how to correctly perform image windowing? Thank you


  • Unit Testing - Am I doing it right?

    - by baron
    Hi everyone, Basically I have been programing for a little while and after finishing my last project can fully understand how much easier it would have been if I'd have done TDD. I guess I'm still not doing it strictly as I am still writing code then writing a test for it, I don't quite get how the test becomes before the code if you don't know what structures and how your storing data etc... but anyway... Kind of hard to explain but basically lets say for example I have a Fruit objects with properties like id, color and cost. (All stored in textfile ignore completely any database logic etc) FruitID FruitName FruitColor FruitCost 1 Apple Red 1.2 2 Apple Green 1.4 3 Apple HalfHalf 1.5 This is all just for example. But lets say I have this is a collection of Fruit (it's a List<Fruit>) objects in this structure. And my logic will say to reorder the fruitids in the collection if a fruit is deleted (this is just how the solution needs to be). E.g. if 1 is deleted, object 2 takes fruit id 1, object 3 takes fruit id2. Now I want to test the code ive written which does the reordering, etc. How can I set this up to do the test? Here is where I've got so far. Basically I have fruitManager class with all the methods, like deletefruit, etc. It has the list usually but Ive changed hte method to test it so that it accepts a list, and the info on the fruit to delete, then returns the list. Unit-testing wise: Am I basically doing this the right way, or have I got the wrong idea? and then I test deleting different valued objects / datasets to ensure method is working properly. [Test] public void DeleteFruit() { var fruitList = CreateFruitList(); var fm = new FruitManager(); var resultList = fm.DeleteFruitTest("Apple", 2, fruitList); //Assert that fruitobject with x properties is not in list ? how } private static List<Fruit> CreateFruitList() { //Build test data var f01 = new Fruit {Name = "Apple",Id = 1, etc...}; var f02 = new Fruit {Name = "Apple",Id = 2, etc...}; var f03 = new Fruit {Name = "Apple",Id = 3, etc...}; var fruitList = new List<Fruit> {f01, f02, f03}; return fruitList; }
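
    As a sketch of how the missing assertions might look (this assumes NUnit plus LINQ and goes by the renumbering rule described above; any property beyond Name and Id is a guess):

        // requires: using System.Linq; using NUnit.Framework;
        [Test]
        public void DeleteFruit_RemovesFruitAndRenumbersIds()
        {
            var fruitList = CreateFruitList();        // three fruit, ids 1..3
            var fm = new FruitManager();

            var resultList = fm.DeleteFruitTest("Apple", 2, fruitList);

            // one fruit is gone
            Assert.AreEqual(2, resultList.Count);

            // and, per the renumbering rule, the survivors now carry ids 1..n
            CollectionAssert.AreEqual(new[] { 1, 2 }, resultList.Select(f => f.Id).ToArray());

            // if Fruit has a distinguishing property (colour, cost, ...), also assert that the
            // deleted fruit's values are gone, e.g.
            // Assert.IsFalse(resultList.Any(f => f.FruitColor == "Green"));
        }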


  • Generics vs Object performance

    - by Risho
    I'm doing practice problems from MCTS Exam 70-536, Microsoft .NET Framework Application Development Foundation, and one of the problems is to create two classes, one generic and one Object-based, that both perform the same task; a loop then uses each class, iterating over a thousand times, and a timer is used to time the performance of both. There was another post, C# generics question, that asks the same thing, but no one replied. Basically, if in my code I run the generic class first, it takes longer to process. If I run the object class first, then the object class takes longer to process. The whole idea was to prove that generics perform faster. I used the original user's code to save myself some time. I didn't see anything particularly wrong with the code and was puzzled by the outcome. Can someone explain these unusual results? Thanks, Risho. Here is the code: class Program { class Object_Sample { public Object_Sample() { Console.WriteLine("Object_Sample Class"); } public long getTicks() { return DateTime.Now.Ticks; } public void display(Object a) { Console.WriteLine("{0}", a); } } class Generics_Samle<T> { public Generics_Samle() { Console.WriteLine("Generics_Sample Class"); } public long getTicks() { return DateTime.Now.Ticks; } public void display(T a) { Console.WriteLine("{0}", a); } } static void Main(string[] args) { long ticks_initial, ticks_final, diff_generics, diff_object; Object_Sample OS = new Object_Sample(); Generics_Samle<int> GS = new Generics_Samle<int>(); //Generic Sample ticks_initial = 0; ticks_final = 0; ticks_initial = GS.getTicks(); for (int i = 0; i < 50000; i++) { GS.display(i); } ticks_final = GS.getTicks(); diff_generics = ticks_final - ticks_initial; //Object Sample ticks_initial = 0; ticks_final = 0; ticks_initial = OS.getTicks(); for (int j = 0; j < 50000; j++) { OS.display(j); } ticks_final = OS.getTicks(); diff_object = ticks_final - ticks_initial; Console.WriteLine("\nPerformance of Generics {0}", diff_generics); Console.WriteLine("Performance of Object {0}", diff_object); Console.ReadKey(); } }
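
    One thing that tends to dominate numbers like these is the Console.WriteLine inside the timed loops (plus the first-run JIT and console warm-up cost, which whichever class runs first has to pay), and DateTime.Now.Ticks has fairly coarse resolution. A hedged sketch of the same comparison with the I/O pulled out of the loop, a warm-up pass, and System.Diagnostics.Stopwatch (illustrative helper names, not the book's reference solution):

        using System;
        using System.Diagnostics;

        class Benchmark
        {
            static T Identity<T>(T value) { return value; }            // generic: no boxing for an int argument
            static object IdentityObj(object value) { return value; }  // object: boxes every int argument

            static long GenericLoop(int n)
            {
                long sum = 0;
                for (int i = 0; i < n; i++) sum += Identity(i);
                return sum;
            }

            static long ObjectLoop(int n)
            {
                long sum = 0;
                for (int i = 0; i < n; i++) sum += (int)IdentityObj(i); // box on the way in, unbox on the way out
                return sum;
            }

            static void Main()
            {
                const int N = 5000000;
                GenericLoop(N); ObjectLoop(N);              // warm-up pass so JIT cost is not measured

                var sw = Stopwatch.StartNew();
                long g = GenericLoop(N);
                sw.Stop();
                Console.WriteLine("Generic: {0} ms (sum {1})", sw.ElapsedMilliseconds, g);

                sw = Stopwatch.StartNew();
                long o = ObjectLoop(N);
                sw.Stop();
                Console.WriteLine("Object:  {0} ms (sum {1})", sw.ElapsedMilliseconds, o);
            }
        }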


  • How to infer the type of a derived class in base class?

    - by enzi
    I want to create a method that allows me to change arbitrary properties of classes that derive from my base class, the result should look like this: SetPropertyValue("size.height", 50); – where size is a property of my derived class and height is a property of size. I'm almost done with my implementation but there's one final obstacle that I want to solve before moving on, to describe this I will first have to explain my implementation a bit: Properties that can be modified are decorated with an attribute There's a method in my base class that searches for all derived classes and their decorated properties For each property I generate a "property modifier", a class that contains 2 delegates: one to set and one to get the value of the property. Property Modifiers are stored in a dictionary, with the name of the property as key In my base class, there is another dictionary that contains all property-modifier-dictionaries, with the Type of the respective class as key. What the SetPropertyValue method does is this: Get the correct property-modifier-dictionary, using the concrete type of the derived class (<- yet to solve) Get the property modifier of the property to change (e.g. of the property size) Use the get or set delegate to modify the property's value Some example code to clarify further: private static Dictionary<RuntimeTypeHandle, object> EditableTypes; //property-modifier-dictionary protected void SetPropertyValue<T>(EditablePropertyMap<T> map, string property, object value) { var property = map[property]; // get the property modifier property.Set((T)this, value); // use the set delegate (encapsulated in a method) } In the above code, T is the Type of the actual (derived) class. I need this type for the get/set delegates. The problem is how to get the EditablePropertyMap<T> when I don't know what T is. My current (ugly) solution is to pass the map in an overriden virtual method in the derived class: public override void SetPropertyValue(string property, object value) { base.SetPropertyValue((EditablePropertyMap<ExampleType>)EditableTypes[typeof(ExampleType)], property, value); } What this does is: get the correct dictionary containing the property modifiers of this class using the class's type, cast it to the appropiate type and pass it to the SetPropertyValue method. I want to get rid of the SetPropertyValue method in my derived class (since there are a lot of derived classes), but don't know yet how to accomplish that. I cannot just make a virtual GetEditablePropertyMap<T> method because I cannot infer a concrete type for T then. I also cannot acces my dictionary directly with a type and retrieve an EditablePropertyMap<T> from it because I cannot cast to it from object in the base class, since again I do not know T. I found some neat tricks to infere types (e.g. by adding a dummy T parameter), but cannot apply them to my specific problem. I'd highly appreciate any suggestions you may have for me.
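
    One pattern that removes the need for a per-class override is to make the base class generic over the derived type itself (a self-referencing constraint, the C# counterpart of the curiously recurring template pattern). A hedged sketch follows, reusing the poster's EditablePropertyMap but with otherwise illustrative names and a placeholder map construction; whether it fits depends on whether the hierarchy can live with a generic base (if a common non-generic base is also needed, an interface can supply it):

        public abstract class Editable<TDerived> where TDerived : Editable<TDerived>
        {
            // A static field in a generic class exists once per closed type, so this is one map
            // per concrete derived class, built once.
            private static readonly EditablePropertyMap<TDerived> Map = BuildMap();

            private static EditablePropertyMap<TDerived> BuildMap()
            {
                // build the map the same way it is built today (reflect over TDerived's
                // decorated properties); the bare constructor call is only a placeholder
                return new EditablePropertyMap<TDerived>();
            }

            public void SetPropertyValue(string propertyName, object value)
            {
                var modifier = Map[propertyName];     // the property modifier from the map
                modifier.Set((TDerived)this, value);  // 'this' is known to be a TDerived, no override needed
            }
        }

        public class ExampleType : Editable<ExampleType>
        {
            // decorated, editable properties go here
        }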


  • What is GC holes?

    - by tianyi
    I wrote a long TCP connection socket server in C#. Spike in memory in my server happens. I used dotNet Memory Profiler(a tool) to detect where the memory leaks. Memory Profiler indicates the private heap is huge, and the memory is something like below(the number is not real,what I want to show is the GC0 and GC2's Holes are very very huge, the data size is normal): Managed heaps - 1,500,000KB Normal heap - 1400,000KB Generation #0 - 600,000KB Data - 100,000KB "Holes" - 500,000KB Generation #1 - xxKB Data - 0KB "Holes" - xKB Generation #2 - xxxxxxxxxxxxxKB Data - 100,000KB "Holes" - 700,000KB Large heap - 131072KB Large heap - 83KB Overhead/unused - 130989KB Overhead - 0KB Howerver, what is GC hole? I read an article about the hole: http://kaushalp.blogspot.com/2007/04/what-is-gc-hole-and-how-to-create-gc.html The author said : The code snippet below is the simplest way to introduce a GC hole into the system. //OBJECTREF is a typedef for Object*. { PointerTable *pTBL = o_pObjectClass->GetPointerTable(); OBJECTREF aObj = AllocateObjectMemory(pTBL); OBJECTREF bObj = AllocateObjectMemory(pTBL); //WRONG!!! “aObj” may point to garbage if the second //“AllocateObjectMemory” triggered a GC. DoSomething (aOb, bObj); } All it does is allocate two managed objects, and then does something with them both. This code compiles fine, and if you run simple pre-checkin tests, it will probably “work.” But this code will crash eventually. Why? If the second call to “AllocateObjectMemory” triggers a GC, that GC discards the object instance you just assigned to “aObj”. This code, like all C++ code inside the CLR, is compiled by a non-managed compiler and the GC cannot know that “aObj” holds a root reference to an object you want kept live. ======================================================================== I can't understand what he explained. Does the sample mean aObj becomes a wild pointer after GC? Is it mean { aObj = (*aObj)malloc(sizeof(object)); free(aObj); function(aObj);? } ? I hope somebody can explain it.


  • Strange behavior of std::cout &operator<<...

    - by themoondothshine
    Hey ppl, I came across something weird today, and I was wondering if any of you here could explain what's happening... Here's a sample: #include <iostream> #include <cassert> using namespace std; #define REQUIRE_STRING(s) assert(s != 0) #define REQUIRE_STRING_LEN(s, n) assert(s != 0 || n == 0) class String { public: String(const char *str, size_t len) : __data(__construct(str, len)), __len(len) {} ~String() { __destroy(__data); } const char *toString() const { return const_cast<const char *>(__data); } String &toUpper() { REQUIRE_STRING_LEN(__data, __len); char *it = __data; while(it < __data + __len) { if(*it >= 'a' && *it <= 'z') *it -= 32; ++it; } return *this; } String &toLower() { REQUIRE_STRING_LEN(__data, __len); char *it = __data; while(it < __data + __len) { if(*it >= 'A' && *it <= 'Z') *it += 32; ++it; } return *this; } private: char *__data; size_t __len; protected: static char *__construct(const char *str, size_t len) { REQUIRE_STRING_LEN(str, len); char *data = new char[len]; std::copy(str, str + len, data); return data; } static void __destroy(char *data) { REQUIRE_STRING(data); delete[] data; } }; int main() { String s("Hello world!", __builtin_strlen("Hello world!")); cout << s.toLower().toString() << endl; cout << s.toUpper().toString() << endl; cout << s.toLower().toString() << endl << s.toUpper().toString() << endl; return 0; } Now, I had expected the output to be: hello world! HELLO WORLD! hello world! HELLO WORLD! but instead I got this: hello world! HELLO WORLD! hello world! hello world! I can't really understand why the second toUpper didn't have any effect.


  • ACL implementation

    - by Kirzilla
    First question Please, could you explain me how simpliest ACL could be implemented in MVC. Here is the first approach of using Acl in Controller... <?php class MyController extends Controller { public function myMethod() { //It is just abstract code $acl = new Acl(); $acl->setController('MyController'); $acl->setMethod('myMethod'); $acl->getRole(); if (!$acl->allowed()) die("You're not allowed to do it!"); ... } } ?> It is very bad approach, and it's minus is that we have to add Acl piece of code into each controller's method, but we don't need any additional dependencies! Next approach is to make all controller's methods private and add ACL code into controller's __call method. <?php class MyController extends Controller { private function myMethod() { ... } public function __call($name, $params) { //It is just abstract code $acl = new Acl(); $acl->setController(__CLASS__); $acl->setMethod($name); $acl->getRole(); if (!$acl->allowed()) die("You're not allowed to do it!"); ... } } ?> It is better than previous code, but main minuses are... All controller's methods should be private We have to add ACL code into each controller's __call method. The next approach is to put Acl code into parent Controller, but we still need to keep all child controller's methods private. What is the solution? And what is the best practice? Where should I call Acl functions to decide allow or disallow method to be executed. Second question Second question is about getting role using Acl. Let's imagine that we have guests, users and user's friends. User have restricted access to viewing his profile that only friends can view it. All guests can't view this user's profile. So, here is the logic.. we have to ensure that method being called is profile we have to detect owner of this profile we have to detect is viewer is owner of this profile or no we have to read restriction rules about this profile we have to decide execute or not execute profile method The main question is about detecting owner of profile. We can detect who is owner of profile only executing model's method $model-getOwner(), but Acl do not have access to model. How can we implement this? I hope that my thoughts are clear. Sorry for my English. Thank you.

