Search Results

  • Git for beginners: The definitive practical guide

    - by Adam Davis
    Ok, after seeing this post by PJ Hyett, I have decided to skip to the end and go with git. So what I need is a beginners practical guide to git. "Beginner" being defined as someone who knows how to handle their compiler, understands to some level what a makefile is, and has touched source control without understanding it very well. "Practical" being defined as this person doesn't want to get into great detail regarding what git is doing in the background, and doesn't even care (or know) that it's distributed. Your answers might hint at the possibilities, but try to aim for the beginner that wants to keep a 'main' repository on a 'server' which is backed up and secure, and treat their local repository as merely a 'client' resource. Procedural note: PLEASE pick one and only one of the below topics and answer it clearly and concisely in any given answer. Don't try to jam a bunch of information into one answer. Don't just link to other resources - cut and paste with attribution if copyright allows, otherwise learn it and explain it in your own words (ie, don't make people leave this page to learn a task). Please comment on, or edit, an already existing answer unless your explanation is very different and you think the community is better served with a different explanation rather than altering the existing explanation. So: Installation/Setup How to install git How do you set up git? Try to cover linux, windows, mac, think 'client/server' mindset. Setup GIT Server with Msysgit on Windows How do you create a new project/repository? How do you configure it to ignore files (.obj, .user, etc) that are not really part of the codebase? Working with the code How do you get the latest code? How do you check out code? How do you commit changes? How do you see what's uncommitted, or the status of your current codebase? How do you destroy unwanted commits? How do you compare two revisions of a file, or your current file and a previous revision? How do you see the history of revisions to a file? How do you handle binary files (visio docs, for instance, or compiler environments)? How do you merge files changed at the "same time"? How do you undo (revert or reset) a commit? Tagging, branching, releases, baselines How do you 'mark' 'tag' or 'release' a particular set of revisions for a particular set of files so you can always pull that one later? How do you pull a particular 'release'? How do you branch? How do you merge branches? How do you resolve conflicts and complete the merge? How do you merge parts of one branch into another branch? What is rebasing? How do I track remote branches? How can I create a branch on a remote repository? Other Describe and link to a good gui, IDE plugin, etc that makes git a non-command line resource, but please list its limitations as well as its good. msysgit - Cross platform, included with git gitk - Cross platform history viewer, included with git gitnub - OS X gitx - OS X history viewer smartgit - Cross platform, commercial, beta tig - console GUI for Linux qgit - GUI for Windows, Linux Any other common tasks a beginner should know? Git Status tells you what you just did, what branch you have, and other useful information How do I work effectively with a subversion repository set as my source control source? 
Other git beginner's references: git guide, git book, git magic, gitcasts, github guides, git tutorial, Pro Git (book by Scott Chacon), Git - SVN Crash Course, Delving into git, Understanding git conceptually. I will go through the entries from time to time and 'tidy' them up so they have a consistent look/feel and it's easy to scan the list - feel free to follow a simple "header - brief explanation - list of instructions - gotchas and extra info" template. I'll also link to the entries from the bullet list above so it's easy to find them later.

  • What is required to use LODSB in assembly?

    - by Harvey
    What is the minimum set of steps required to use LODSB to load a relative address to a string in my code? I have the following test program that I'm using PXE to boot. I boot it two ways: via pxelinux.0 and directly. If I boot it directly, my program prints both strings. If I boot via pxelinux.0, it only prints the first string. Why? Working technique (for both): Set the direction flag to increment, cld Set ds to cs Put the address (from start) of string in si Add the starting offset to si Non-working technique (just for pxelinux): Calculate a new segment address based on (((cs << 4) + offset) >> 4) Set ds to that. (either A000 or 07C0) text here to fix bug in markdown // Note: If you try this code, don't forget to set // the "#if 0" below appropriately! .text .globl start, _start start: _start: _start1: .code16 jmp real_start . = _start1 + 0x1fe .byte 0x55, 0xAA // Next sector . = _start1 + 0x200 jmp real_start test1_str: .asciz "\r\nTest: 9020:fe00" test2_str: .asciz "\r\nTest: a000:0000" real_start: cld // Make sure %si gets incremented. #if 0 // When loaded by pxelinux, we're here: // 9020:fe00 ==> a000:0000 // This works. movw $0x9020, %bx movw %bx, %ds movw $(test1_str - _start1), %si addw $0xfe00, %si call print_message // This does not. movw $0xA000, %bx movw %bx, %ds movw $(test2_str - _start1), %si call print_message #else // If we are loaded directly without pxelinux, we're here: // 0000:7c00 ==> 07c0:0000 // This works. movw $0x0000, %bx movw %bx, %ds movw $(test1_str - _start1), %si addw $0x7c00, %si call print_message // This does, too. movw $0x07c0, %bx movw %bx, %ds movw $(test2_str - _start1), %si call print_message #endif // Hang the computer sti 1: jmp 1b // Prints string DS:SI (modifies AX BX SI) print_message: pushw %ax jmp 2f 3: movb $0x0e, %ah /* print char in AL */ int $0x10 /* via TTY mode */ 2: lodsb (%si), %al /* get token */ cmpb $0, %al /* end of string? */ jne 3b popw %ax ret .balign 0x200 Here's the compilation: /usr/bin/ccache gcc -Os -fno-stack-protector -fno-builtin -nostdinc -DSUPPORT_SERIAL=1 -DSUPPORT_HERCULES=1 -DSUPPORT_GRAPHICS=1 -DHAVE_CONFIG_H -I. -Wall -ggdb3 -Wmissing-prototypes -Wunused -Wshadow -Wpointer-arith -falign-jumps=1 -falign-loops=1 -falign-functions=1 -Wundef -g -c -o ds_teststart_exec-ds_teststart.o ds_test.S /usr/bin/ccache gcc -g -o ds_teststart.exec -nostdlib -Wl,-N -Wl,-Ttext -Wl,8000 ds_teststart_exec-ds_teststart.o objcopy -O binary ds_teststart.exec ds_teststart

  • How is WPF Data Binding using Object Data Source in Visual Studio 2010 done?

    - by Rob Perkins
    This is probably mostly a question about how to use the VS 2010 IDE tools in a way the Microsofties didn't specifically intend. But since this is something I immediately tried without success. I have defined a .NET 4.0 WPF Application project with a simple class that looks like this: Public Class Class1 Public Property One As String = "OneString" Public Property Two As String = "TwoString" End Class I then defined it as an "Object Data Source" in VS2010, using the IDE's "Add New Data Source..." feature. This exposes the class members in a GUI element in the IDE as given in this image: Dragging "Class1" from that tool to the surface of "Window1.xaml" in a default "WPF Application" results in the design view looking like this: And generated XAML like this: <Window x:Class="Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="133" Width="170" xmlns:my="clr-namespace:WpfApplication1" mc:Ignorable="d" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" > <Window.Resources> <CollectionViewSource x:Key="Class1ViewSource" d:DesignSource="{d:DesignInstance my:Class1, CreateList=True}" /> </Window.Resources> <Grid DataContext="{StaticResource Class1ViewSource}" HorizontalAlignment="Left" Name="Grid1" VerticalAlignment="Top"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="Auto" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> </Grid.RowDefinitions> <Label Content="One:" Grid.Column="0" Grid.Row="0" HorizontalAlignment="Left" Margin="3" VerticalAlignment="Center" /> <TextBlock Grid.Column="1" Grid.Row="0" Height="23" HorizontalAlignment="Left" Margin="3" Name="OneTextBlock" Text="{Binding Path=One}" VerticalAlignment="Center" /> <Label Content="Two:" Grid.Column="0" Grid.Row="1" HorizontalAlignment="Left" Margin="3" VerticalAlignment="Center" /> <TextBlock Grid.Column="1" Grid.Row="1" Height="23" HorizontalAlignment="Left" Margin="3" Name="TwoTextBlock" Text="{Binding Path=Two}" VerticalAlignment="Center" /> </Grid> Note the data bindings Text="{Binding Path=One}" and Text="{Binding Path=Two}" in the TextBlock elements. Code-behind for Window1.xaml has this in Window_Loaded: Class Window1 Private m_c1 As New Class1 Private Sub Window1_Loaded(ByVal sender As Object, ByVal e As System.Windows.RoutedEventArgs) Handles Me.Loaded Dim Class1ViewSource As System.Windows.Data.CollectionViewSource = CType(Me.FindResource("Class1ViewSource"), System.Windows.Data.CollectionViewSource) 'Load data by setting the CollectionViewSource.Source property: 'Class1ViewSource.Source = [generic data source] Me.DataContext = m_c1 End Sub End Class Running the application produces this output: The expected result was that "OneString" would appear next to "One" and "TwoString" next to "Two" in the running window. The question is: Why didn't this work? What will work instead? If I put bindings in a DataTemplate, it works. Blend, with its sample data stuff, implied that this should work, but it doesn't. I know I'm missing something pretty fundamental here; what is it?

  • How to use c++0x thread in Android NDK?

    - by m-ric
    I am trying to compile this simple program with android-ndk-r8b: jni/hello_jni.cpp #include <iostream> #include <thread> void hello() { std::cout << "Hi i'm a thread!!!" << std::endl; } int main() { std::thread th(hello); th.join(); return 0; } jni/Application.mk APP_OPTIM := release APP_MODULES := hello_thread APP_STL := gnustl_static jni/Android.mk LOCAL_PATH := $(call my-dir) include $(CLEAR_VARS) LOCAL_CPPFLAGS += -std=c++0x -frtti LOCAL_MODULE := hello_thread LOCAL_LDLIBS := -L$(SYSROOT)/usr/lib -pthread LOCAL_SRC_FILES := hello_thread.cpp include $(BUILD_EXECUTABLE) ndk-build returns me an error arguin that 'thread' is not a member of 'std'. I issued ndk-build -n to get the compilation command and issued it alone in my shell: /home/evigier/android-ndk-r8b/toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86/bin/arm-linux-androideabi-g++ -MMD -MP -MF /home/evigier/eclipse_workspace/hello_thread/obj/local/armeabi/objs/hello_thread/hello_thread.o.d -fpic -ffunction-sections -funwind-tables -fstack-protector -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -march=armv5te -mtune=xscale -msoft-float -fno-exceptions -fno-rtti -mthumb -Os -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -I/home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include -I/home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/libs/armeabi/include -I/home/evigier/eclipse_workspace/hello_thread/jni -DANDROID -Wa,--noexecstack -std=c++0x -frtti -O2 -DNDEBUG -g -I/home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include -c /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp -o /home/evigier/eclipse_workspace/hello_thread/obj/local/armeabi/objs/hello_thread/hello_thread.o Compile++ thumb : hello_thread <= hello_thread.cpp In file included from /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/stdio.h:55:0, from /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/wchar.h:33, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/cwchar:46, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/bits/postypes.h:42, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/iosfwd:42, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/ios:39, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/ostream:40, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/iostream:40, from jni/hello_thread.cpp:4: /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/sys/types.h:124:9: error: 'uint64_t' does not name a type /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp: In function 'int main()': /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:14:5: error: 'thread' is not a member of 'std' /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:14:17: error: expected ';' before 'th' /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:15:5: error: 'th' was not declared in this scope I read a lot of threads/questions about POSIX threads and C++ threads, but still cannot find my answer. My arm-linux-androideabi/include/c++/4.6/thread file defines class thread in std only: #if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1) They don't seem to be defined in my sdk (c++config.h). But how can I possibly turn them on safely? 
Do I need to compile my own toolchain to use (non-p)threads? My host computer is: Linux evigier-ThinkPad-X220 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 20:45:39 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

  • What precautions should you take when a senior employee leaves?

    - by Mahin
    EDIT: I agree one should check the reasons why a senior-level employee is leaving. But I am interested in knowing the official/management/technical/legal steps one should take after it's decided that he is leaving, so that life after him is smooth. What steps should management take when a senior programmer/team lead leaves your company? Some that I have thought about are: 1) If he used to manage hosting and domains, change the passwords of the domain control panels and hosting panels. 2) If your published web sites have a maintenance account and he knows its credentials, change those details as well. 3) Suspend his mail account for some time and forward all emails from that account to an ex-employee account; after some time, close the account. What other things should one check? I am expecting the answer to be a general checklist one should follow, covering both technical and management scenarios. Notable suggestions so far: Effectively transfer the responsibilities of that employee to another one without causing any potential delay in your work. Protect your source code; if possible, make them sign something to say that they don't have copies of the source code. You can also consider an NDA here. Use the notice period to train his replacement; any new code for the project is then written by the replacement, with the help of the person who is leaving. Ask him to create a document of things he thinks you should know. Make sure he checks everything in; from then on, any checkout is done only by the replacement. Emails: copy his email account off to a .pst file (this assumes Outlook) and make this file available to his replacement; the employee should probably be given a chance to scrub the email first. If you are going to keep his account open for whatever reason, check that no rules are created that forward incoming emails to an alternate address. Copy the hard drive of his computer to a network location and have someone senior go through it to see if there are any files (drafts of performance reviews or other sensitive issues) on it that someone else might need. Get clearance from the Accounts, Finance, Security, Library, etc. departments, and obtain all company property: laptops, keys, and so on. If there is no reason not to, you should reward a departing person for their many years of service: write them letters of recommendation (even if they already have a new job lined up), say goodbye, and keep the door open. Make sure any outside clients know that the departing employee is not their main contact anymore. Never neglect the exit interview/debriefing. Confirm the last day of employment so that there is no misunderstanding. Inform HR if the employee is on H1B status; there is paperwork required to notify the government when an H1B employee leaves. Depending on how senior he is and what position he holds, you might spend some time convincing him not to take the rest of the engineering staff with him. Make sure he spends his last days on a good note, because if he is not leaving on a good note, he can easily pollute the minds of his colleagues. Best regards, Mahin Gupta. EDIT: I've now offered a bounty on it to get more detailed responses and practical suggestions.

  • Fluent NHibernate - subclasses with shared reference

    - by ollie
    Edit: changed class names. I'm using Fluent NHibernate (v 1.0.0.614) automapping on the following set of classes (where Entity is the base class provided in the S#arp Architecture framework): public class Car : Entity { public virtual int ModelYear { get; set; } public virtual Company Manufacturer { get; set; } } public class Sedan : Car { public virtual bool WonSedanOfYear { get; set; } } public class Company : Entity { public virtual IList<Sedan> Sedans { get; set; } } This results in the following Configuration (as written to hbm.xml): <class name="Company" table="Companies"> <id name="Id" type="System.Int32" unsaved-value="0"> <column name="`ID`" /> <generator class="identity" /> </id> <bag cascade="all" inverse="true" name="Sedans" mutable="true"> <key> <column name="`CompanyID`" /> </key> <one-to-many class="Sedan" /> </bag> </class> <class name="Car" table="Cars"> <id name="Id" type="System.Int32" unsaved-value="0"> <column name="`ID`" /> <generator class="identity" /> </id> <property name="ModelYear" type="System.Int32"> <column name="`ModelYear`" /> </property> <many-to-one cascade="save-update" class="Company" name="Manufacturer"> <column name="`CompanyID`" /> </many-to-one> <joined-subclass name="Sedan"> <key> <column name="`CarID`" /> </key> <property name="WonSedanOfYear" type="System.Boolean"> <column name="`WonSedanOfYear`" /> </property> </joined-subclass> </class> So far so good! But now comes the ugly part. The generated database tables: Table: Companies Columns: ID (PK, int, not null) Table: Cars Columns: ID (PK, int, not null) ModelYear (int, null) CompanyID (FK, int, null) Table: Sedan Columns: CarID (PK, FK, int, not null) WonSedanOfYear (bit, null) CompanyID (FK, int, null) Instead of one FK for Company, I get two! How can I ensure I only get one FK for Company? Override the automapping? Put a convention in place? Or is this a bug? Your thoughts are appreciated.
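
    A hedged sketch of the override route mentioned above (Fluent NHibernate API names as of roughly v1.0 -- verify against your version; whether this actually collapses the two columns still needs checking against the generated hbm.xml):

        using FluentNHibernate.Automapping;
        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;

        // Sketch: override the automapping for Company so the Sedans collection
        // is inverse and reuses the CompanyID column that Car.Manufacturer creates,
        // instead of letting automapping invent a second foreign key.
        var model = AutoMap.AssemblyOf<Car>()
            .Override<Company>(map =>
                map.HasMany(x => x.Sedans)
                   .KeyColumn("CompanyID")
                   .Inverse());

        var sessionFactory = Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.InMemory()) // placeholder database config
            .Mappings(m => m.AutoMappings.Add(model))
            .BuildSessionFactory();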

  • Tomcat stops responding to JK requests

    - by Bruno Reis
    Hello. I have a nasty issue with load-balanced Tomcat servers that are hanging up. Any help would be greatly appreciated. The system I'm running Tomcat 6.0.26 on HotSpot Server 14.3-b01 (Java 1.6.0_17-b04) on three servers sitting behind another server that acts as load balancer. The load balancer runs Apache (2.2.8-1) + MOD_JK (1.2.25). All of the servers are running Ubuntu 8.04. The Tomcat's have 2 connectors configured: an AJP one, and a HTTP one. The AJP is to be used with the load balancer, while the HTTP is used by the dev team to directly connect to a chosen server (if we have a reason to do so). I have Lambda Probe 1.7b installed on the Tomcat servers to help me diagnose and fix the problem soon to be described. The problem Here's the problem: after about 1 day the application servers are up, JK Status Manager starts reporting status ERR for, say, Tomcat2. It will simply get stuck on this state, and the only fix I've found so far is to ssh the box and restart Tomcat. I must also mention that JK Status Manager takes a lot longer to refresh when there's a Tomcat server in this state. Finally, the "Busy" count of the stuck Tomcat on JK Status Manager is always high, and won't go down per se -- I must restart the Tomcat server, wait, then reset the worker on JK. Analysis Since I have 2 connectors on each Tomcat (AJP and HTTP), I still can connect to the application through the HTTP one. The application works just fine like this, very, very fast. That is perfectly normal, since I'm the only one using this server (as JK stopped delegating requests to this Tomcat). To try to better understand the problem, I've taken a thread dump from a Tomcat which is not responding anymore, and from another one that has been restarted recently (say, 1 hour before). The instance that is responding normally to JK shows most of the TP-ProcessorXXX threads in "Runnable" state, with the following stack trace: java.net.SocketInputStream.socketRead0 ( native code ) java.net.SocketInputStream.read ( SocketInputStream.java:129 ) java.io.BufferedInputStream.fill ( BufferedInputStream.java:218 ) java.io.BufferedInputStream.read1 ( BufferedInputStream.java:258 ) java.io.BufferedInputStream.read ( BufferedInputStream.java:317 ) org.apache.jk.common.ChannelSocket.read ( ChannelSocket.java:621 ) org.apache.jk.common.ChannelSocket.receive ( ChannelSocket.java:559 ) org.apache.jk.common.ChannelSocket.processConnection ( ChannelSocket.java:686 ) org.apache.jk.common.ChannelSocket$SocketConnection.runIt ( ChannelSocket.java:891 ) org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run ( ThreadPool.java:690 ) java.lang.Thread.run ( Thread.java:619 ) The instance that is stuck show most (all?) of the TP-ProcessorXXX threads in "Waiting" state. These have the following stack trace: java.lang.Object.wait ( native code ) java.lang.Object.wait ( Object.java:485 ) org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run ( ThreadPool.java:662 ) java.lang.Thread.run ( Thread.java:619 ) I don't know of the internals of Tomcat, but I would infer that the "Waiting" threads are simply threads sitting on a thread pool. So, if they are threads waiting inside of a thread pool, why wouldn't Tomcat put them to work on processing requests from JK? Solution? So, as I've stated before, the only fix I've found is to stop the Tomcat instance, stop the JK worker, wait the latter's busy count slowly go down, start Tomcat again, and enable the JK worker once again. What is causing this problem? How should I further investigate it? 
What can I do to solve it? Thanks in advance.

  • WebSphere Application Server EJB Optimization

    - by Chris Aldrich
    We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes. Our application, or our system, as I should rather say, comes in two or three parts. Part 1: An ear deployed to one cluster that contains 3rd party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of Remote Home interfaces. Part 2: An ear deployed to the same cluster as the first ear. This ear contains EJB 3's that make calls into the EJB 2's supplied by the vendor and the custom code. These EJB 3's are used by the JSF UI also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients. Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0's and web services that are deployed to the other cluster. Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI, but if we are going across clusters and/or other cells, then the communication should be web services. That said, some of us are wondering about performance and optimizing communication for speed in the applications that will use our web services and EJB's. Right now most EJB's are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster node space. If two apps are installed in the same area and they call each other via a remote home interface, is WAS smart enough to make it a local home interface call? Are there other optimization techniques? Should we consider them? Should we not? What are the costs/benefits? Here is the question from one of our team members as sent in their email: Supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT Java services via EJB 3...what are our options for performance optimization when both the EJB server and client are running in the same container? As one point of reference, Google has given me some old WebSphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable Call By Reference for EJB communication when they're in the same application server JVM. It states the following: Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model. WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings. Configure "No Local Copies" by adding the following two command line parameters to the application server JVM: * -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util * -Dcom.ibm.CORBA.iiop.noLocalCopies=true CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that Java object-derived (non-primitive) method parameters can actually be changed by the called enterprise bean. Consider Figure 16a. We will also be using Process Server 6.2 and WESB 6.2 in the future. Any ideas or recommendations? Thanks

  • Is it possible to implement bitwise operators using integer arithmetic?

    - by Statement
    Hello World! I am facing a rather peculiar problem. I am working on a compiler for an architecture that doesn't support bitwise operations. However, it handles signed 16-bit integer arithmetic, and I was wondering if it would be possible to implement bitwise operations using only: Addition (c = a + b) Subtraction (c = a - b) Division (c = a / b) Multiplication (c = a * b) Modulus (c = a % b) Minimum (c = min(a, b)) Maximum (c = max(a, b)) Comparisons (c = (a < b), c = (a == b), c = (a <= b), etc.) Jumps (goto, for, etc.) The bitwise operations I want to be able to support are: Or (c = a | b) And (c = a & b) Xor (c = a ^ b) Left Shift (c = a << b) Right Shift (c = a >> b) (All integers are signed so this is a problem) Signed Shift (c = a >>> b) One's Complement (a = ~b) (Already found a solution, see below) Normally the problem is the other way around: how to achieve arithmetic optimizations using bitwise hacks. However, not in this case. Writable memory is very scarce on this architecture, hence the need for bitwise operations. The bitwise functions themselves should not use a lot of temporary variables. However, constant read-only data and instruction memory is abundant. A side note here also is that jumps and branches are not expensive and all data is readily cached. Jumps cost half the cycles that arithmetic (including load/store) instructions do. In other words, all of the above supported functions cost twice the cycles of a single jump. Some thoughts that might help: I figured out that you can do one's complement (negate bits) with the following code: // Bitwise one's complement b = ~a; // Arithmetic one's complement b = -1 - a; I also remember the old shift hack when dividing with a power of two, so the bitwise shift can be expressed as: // Bitwise left shift b = a << 4; // Arithmetic left shift b = a * 16; // 2^4 = 16 // Signed right shift b = a >>> 4; // Arithmetic right shift b = a / 16; For the rest of the bitwise operations I am slightly clueless. I wish the architects of this architecture had supplied bit operations. I would also like to know if there is a fast/easy way of computing a power of two (for shift operations) without using a memory data table. A naive solution would be to jump into a field of multiplications: b = 1; switch (a) { case 15: b = b * 2; case 14: b = b * 2; // ... exploiting fallthrough (instruction memory is magnitudes larger) case 2: b = b * 2; case 1: b = b * 2; } Or a Set & Jump approach: switch (a) { case 15: b = 32768; break; case 14: b = 16384; break; // ... exploiting the fact that a jump is faster than one additional mul // at the cost of doubling the instruction memory footprint. case 2: b = 4; break; case 1: b = 2; break; }
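
    Since division and modulus are available, bit i of x is just (x / 2^i) % 2, which gives a direct (if slow) construction of AND; OR and XOR then fall out of arithmetic identities. A minimal C# sketch, assuming non-negative values for clarity (the signed 16-bit wrap-around needs extra care):

        // Sketch: bitwise AND from integer arithmetic only.
        // Assumes 0 <= a, b <= 32767.
        static int ArithmeticAnd(int a, int b)
        {
            int result = 0;
            int bit = 1;                      // 2^i for the current bit position
            for (int i = 0; i < 15; i++)
            {
                // (x / bit) % 2 extracts bit i using only division and modulus.
                if ((a / bit) % 2 == 1 && (b / bit) % 2 == 1)
                    result = result + bit;
                bit = bit * 2;
            }
            return result;
        }

        // With AND in place (for non-negative a and b):
        //   a | b == a + b - (a & b)
        //   a ^ b == a + b - 2 * (a & b)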

  • Can a conforming C implementation #define NULL to be something wacky

    - by janks
    I'm asking because of the discussion that's been provoked in this thread: http://stackoverflow.com/questions/2597142/when-was-the-null-macro-not-0/2597232 Trying to have a serious back-and-forth discussion using comments under other people's replies is not easy or fun. So I'd like to hear what our C experts think without being restricted to 500 characters at a time. The C standard has precious few words to say about NULL and null pointer constants. There's only two relevant sections that I can find. First: 3.2.2.3 Pointers An integral constant expression with the value 0, or such an expression cast to type void * , is called a null pointer constant. If a null pointer constant is assigned to or compared for equality to a pointer, the constant is converted to a pointer of that type. Such a pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function. and second: 4.1.5 Common definitions <stddef.h> The macros are NULL which expands to an implementation-defined null pointer constant; The question is, can NULL expand to an implementation-defined null pointer constant that is different from the ones enumerated in 3.2.2.3? In particular, could it be defined as: #define NULL __builtin_magic_null_pointer Or even: #define NULL ((void*)-1) My reading of 3.2.2.3 is that it specifies that an integral constant expression of 0, and an integral constant expression of 0 cast to type void* must be among the forms of null pointer constant that the implementation recognizes, but that it isn't meant to be an exhaustive list. I believe that the implementation is free to recognize other source constructs as null pointer constants, so long as no other rules are broken. So for example, it is provable that #define NULL (-1) is not a legal definition, because in if (NULL) do_stuff(); do_stuff() must not be called, whereas with if (-1) do_stuff(); do_stuff() must be called; since they are equivalent, this cannot be a legal definition of NULL. But the standard says that integer-to-pointer conversions (and vice-versa) are implementation-defined, therefore it could define the conversion of -1 to a pointer as a conversion that produces a null pointer. In which case if ((void*)-1) would evaluate to false, and all would be well. So what do other people think? I'd ask for everybody to especially keep in mind the "as-if" rule described in 2.1.2.3 Program execution. It's huge and somewhat roundabout, so I won't paste it here, but it essentially says that an implementation merely has to produce the same observable side-effects as are required of the abstract machine described by the standard. It says that any optimizations, transformations, or whatever else the compiler wants to do to your program are perfectly legal so long as the observable side-effects of the program aren't changed by them. So if you are looking to prove that a particular definition of NULL cannot be legal, you'll need to come up with a program that can prove it. Either one like mine that blatantly breaks other clauses in the standard, or one that can legally detect whatever magic the compiler has to do to make the strange NULL definition work. 
Steve Jessop found an example of a way for a program to detect that NULL isn't defined to be one of the two forms of null pointer constant in 3.2.2.3, which is to stringize the constant: #define stringize_helper(x) #x #define stringize(x) stringize_helper(x) Using this macro, one could puts(stringize(NULL)); and "detect" that NULL does not expand to one of the forms in 3.2.2.3. Is that enough to render other definitions illegal? I just don't know. Thanks!

  • Run unit tests in Jenkins / Hudson in automated fashion from dev to build server

    - by Kevin Donde
    We are currently running a Jenkins (Hudson) CI server to build and package our .net web projects and database projects. Everything is working great but I want to start writing unit tests and then only passing the build if the unit tests pass. We are using the built in msbuild task to build the web project. With the following arguments ... MsBuild Version .NET 4.0 MsBuild Build File ./WebProjectFolder/WebProject.csproj Command Line Arguments ./target:Rebuild /p:Configuration=Release;DeployOnBuild=True;PackageLocation=".\obj\Release\WebProject.zip";PackageAsSingleFile=True We need to run automated tests over our code that run automatically when we build on our machines (post build event possibly) but also run when Jenkins does a build for that project. If you run it like this it doesn't build the unit tests project because the web project doesn't reference the test project. The test project would reference the web project but I'm pretty sure that would be butchering our automated builds as they exist primarily to build and package our deployments. Running these tests should be a step in that automated build and package process. Options ... Create two Jenkins jobs. one to run the tests ... if the tests pass another build is triggered which builds and packages the web project. Put the post build event on the test project. Build the solution instead of the project (make sure the solution contains the required tests) and put post build events on any test projects that would run the nunit console to run the tests. Then use the command line to copy all the required files from each of the bin and content directories into a package. Just build the test project in jenkins instead of the web project in jenkins. The test project would reference the web project (depending on what you're testing) and build it. Problems ... There's two jobs and not one. Two things to debug not one. One to see if the tests passed and one to build and compile the web project. The tests could pass but the build could fail if its something that isn't used by what you're testing ... This requires us to know exactly what goes into the build. Right now msbuild does it all for us. If you have multiple teams working on a project everytime an extra folder is created you have to worry about the possibly brittle command line statements. This seems like a corruption of our main purpose here. The tests should be a step in this process not the overriding most important thing in this process. I'm also not 100% sure that a triggered build is the same as a normal build does it do all the same things as a normal build. Move all the correct files in the same way move them all into the same directories etc. Initial problem. We want to run our tests whenever our main project is built. But adding a post build event to the web project that runs against the test project doesn't work because the web project doesn't reference the test project and won't trigger a build of this project. I could go on ... but that's enough ... We've spent about a week trying to make this work nicely but haven't succeeded. Feel free to edit this if you feel you can get a better response ...

  • C++ Program Always Crashes While doing a std::string assign

    - by bbazso
    I have been trying to debug a crash in my application (i.e. it aborts with *** glibc detected *** free(): invalid pointer: 0x000000000070f0c0 ***) that occurs while I'm trying to do a simple assign to a string. Note that I'm compiling on a Linux system with gcc 4.2.4, with the optimization level set to -O2. With -O0 the application no longer crashes. E.g. this crashes: std::string abc; abc = "testString"; but if I change the code as follows, it no longer crashes: std::string abc("testString"); So again I scratched my head! But the interesting pattern was that the crash moved later on in the application, AGAIN at another string. I found it weird that the application was continuously crashing on a string assign. A typical crash backtrace would look as follows: #0 0x00007f2c2663bfb5 in raise () from /lib64/libc.so.6 (gdb) bt #0 0x00007f2c2663bfb5 in raise () from /lib64/libc.so.6 #1 0x00007f2c2663dbc3 in abort () from /lib64/libc.so.6 #2 0x00000000004d8cb7 in people_streamingserver_sighandler (signum=6) at src/peoplestreamingserver.cpp:487 #3 <signal handler called> #4 0x00007f2c2663bfb5 in raise () from /lib64/libc.so.6 #5 0x00007f2c2663dbc3 in abort () from /lib64/libc.so.6 #6 0x00007f2c26680ce0 in ?? () from /lib64/libc.so.6 #7 0x00007f2c270ca7a0 in std::string::assign (this=0x7f2c21bc8d20, __str=<value optimized out>) at /home/bbazso/ThirdParty/sources/gcc-4.2.4/x86_64-pc-linux-gnu/libstdc++-v3/include/bits/basic_string.h:238 #8 0x00007f2c21bd874a in PEOPLESProtocol::GetStreamName (this=<value optimized out>, pRawPath=0x2342fd8 "rtmp://127.0.0.1/mp4:pop.mp4", lStreamName=@0x7f2c21bc8d20) at /opt/trx-HEAD/gcc/4.2.4/lib/gcc/x86_64-pc-linux-gnu/4.2.4/../../../../include/c++/4.2.4/bits/basic_string.h:491 #9 0x00007f2c21bd9daa in PEOPLESProtocol::SignalProtocolCreated (pProtocol=0x233a4e0, customParameters=@0x7f2c21bc8de0) at peoplestreamer/src/peoplesprotocol.cpp:240 This was really weird behavior, so I started to poke around further in my application to see if there was some sort of memory corruption (either heap or stack) that could be causing it. I even checked for pointer corruption and came up empty-handed. In addition to visual inspection of the code, I also tried the following tools: Valgrind using both memcheck and exp-ptrcheck, Electric Fence, libsafe; I compiled with -fstack-protector-all in gcc, I tried MALLOC_CHECK_ set to 2, I ran my code through lint checks as well as cppcheck (to check for mistakes), and I stepped through the code using gdb. So I tried a lot of stuff and still came up empty-handed. So I was wondering if it could be something like a linker issue or a library issue of some sort that could be causing this problem. Are there any known issues with std::string that make it susceptible to crashing at -O2, or maybe it has nothing to do with the optimization level? The only pattern I can see thus far is that it always seems to crash on a string, so I was wondering if anyone knew of any issues that may be causing this type of behavior. Thanks a lot!

  • Is this the right way to use NHibernate?

    - by Venemo
    I spent the rest of the evening reading StackOverflow questions and also some blog entries and links about the subject. All of them turned out to be very helpful, but I still feel that they don't really answer my question. So, I'm developing a simple web application. I'd like to create a reusable data access layer which I can later reuse in other solutions. 99% of these will be web applications. This seems to be a good excuse for me to learn NHibernate and some of the patterns around it. My goals are the following: I don't want the business logic layer to know ANYTHING about the inner workings of the database, nor NHibernate itself. I want the business logic layer to have the least possible number of assumptions about the data access layer. I want the data access layer to be as simplistic and easy-to-use as possible. This is going to be a simple project, so I don't want to overcomplicate anything. I want the data access layer to be as non-intrusive as possible. With all this in mind, I decided to use the popular repository pattern. I read about this subject on this site and on various dev blogs, and I heard some stuff about the unit of work pattern. I also looked around and checked out various implementations (including FubuMVC contrib, SharpArchitecture, and stuff on some blogs). I found out that most of these operate on the same principle: they create a "unit of work" which is instantiated when a repository is instantiated, they start a transaction, do stuff, commit, and then start all over again. So, only one ISession per Repository and that's it. Then the client code needs to instantiate a repository, do stuff with it, and then dispose of it. This usage pattern doesn't meet my need of being as simplistic as possible, so I began thinking about something else. I found out that NHibernate already has something which makes custom "unit of work" implementations unnecessary, and that is the CurrentSessionContext class. If I configure the session context correctly and do the cleanup when necessary, I'm good to go. So, I came up with this: I have a static class called NHibernateHelper. Firstly, it has a static property called CurrentSessionFactory, which upon first call instantiates a session factory and stores it in a static field. (One ISessionFactory per AppDomain is good enough.) Then, more importantly, it has a CurrentSession static property, which checks if there is an ISession bound to the current session context, and if not, creates one and binds it, and then returns the ISession bound to the current session context. Because it will be used mostly with WebSessionContext (so, one ISession per HttpRequest, although for the unit tests I configured ThreadStaticSessionContext), it should work seamlessly. And after creating and binding an ISession, it hooks an event handler to the HttpContext.Current.ApplicationInstance.EndRequest event, which takes care of cleaning up the ISession after the request ends. (Of course, it only does this if it is really running in a web environment.) So, with all this set up, NHibernateHelper will always be able to return a valid ISession, so there is no need to instantiate a Repository instance for the "unit of work" to operate properly. Instead, the Repository is a static class which operates with the ISession from the NHibernateHelper.CurrentSession property, and exposes some functionality through that. I'm curious, what do you think about this? Is it a valid way of thinking, or am I completely off track here?
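
    For reference, a minimal C# sketch of the helper described above (it assumes hibernate.cfg.xml sets current_session_context_class to "web", or "thread_static" for the tests):

        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Context;

        public static class NHibernateHelper
        {
            private static readonly object padlock = new object();
            private static ISessionFactory factory;

            public static ISessionFactory CurrentSessionFactory
            {
                get
                {
                    lock (padlock)   // one factory per AppDomain
                    {
                        if (factory == null)
                            factory = new Configuration().Configure().BuildSessionFactory();
                        return factory;
                    }
                }
            }

            public static ISession CurrentSession
            {
                get
                {
                    // Bind a fresh session to the configured context if none is bound yet.
                    if (!CurrentSessionContext.HasBind(CurrentSessionFactory))
                        CurrentSessionContext.Bind(CurrentSessionFactory.OpenSession());
                    return CurrentSessionFactory.GetCurrentSession();
                }
            }

            // Called from the EndRequest handler mentioned above.
            public static void CloseSession()
            {
                ISession session = CurrentSessionContext.Unbind(CurrentSessionFactory);
                if (session != null)
                    session.Dispose();
            }
        }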

  • Help with Silverlight Sockets and Message delivery

    - by pixel3cs
    It has been 4 months since I stopped developing my Silverlight multiplayer chess game. The problem was a bug which I couldn't reproduce. Since I got some free time this week, I managed to discover the problem, and I am now able to reproduce the bug. It seems that if I send 10 messages from the client, one after another, with no delay between them, just like in the below example // when I press Enter, the client will send 10 messages with no delay between them private void textBox_KeyDown(object sender, KeyEventArgs e) { if (e.Key == Key.Enter && textBox.Text.Length > 0) { for (int i = 0; i < 10; i++) { MessageBuilder mb = new MessageBuilder(); mb.Writer.Write((byte)GameCommands.NewChatMessageInTable); mb.Writer.Write(string.Format("{0}{2}: {1}", ClientVars.PlayerNickname, textBox.Text, i)); SendChatMessageEvent(mb.GetMessage()); //System.Threading.Thread.Sleep(100); } textBox.Text = string.Empty; } } // the method used by the client to send a message to the server public void SendData(Message message) { if (socket.Connected) { SocketAsyncEventArgs myMsg = new SocketAsyncEventArgs(); myMsg.RemoteEndPoint = socket.RemoteEndPoint; byte[] buffer = message.Buffer; myMsg.SetBuffer(buffer, 0, buffer.Length); socket.SendAsync(myMsg); } else { string err = "Server does not respond. You are disconnected."; socket.Close(); uiContext.Post(this.uiClient.ProcessOnErrorData, err); } } // the method used by the server to receive data from the client private void OnDataReceived(IAsyncResult async) { ClientSocketPacket client = async.AsyncState as ClientSocketPacket; int count = 0; try { if (client.Socket.Connected) count = client.Socket.EndReceive(async); // THE PROBLEM IS HERE // IF THE SERVER RECEIVED ALL MESSAGES SEPARATELY, ONE BY ONE, THE COUNT // WOULD ALWAYS BE 15, BUT BECAUSE THE SERVER RECEIVES 3 MESSAGES IN 1, THE COUNT // IS SOMETIMES 45 } catch { HandleException(client); } client.MessageStream.Write(client.Buffer, 0, count); Message message; while (client.MessageStream.Read(out message)) { message.Tag = client; ThreadPool.QueueUserWorkItem(new WaitCallback(this.processingThreadEvent.ServerGotData), message); totalReceivedBytes += message.Buffer.Length; } try { if (client.Socket.Connected) client.Socket.BeginReceive(client.Buffer, 0, client.Buffer.Length, 0, new AsyncCallback(OnDataReceived), client); } catch { HandleException(client); } } only 3 big messages are sent, and every big message contains 3 or 4 small messages. This is not the behavior I want. If I put a 100-millisecond delay between messages, everything works fine, but in a real-world scenario users can send messages to the server with as little as 1 millisecond between them. Are there any settings that would make the client send only one message at a time? And even if I receive 3 messages in 1, are they full messages all the time (I don't want to receive 2.5 messages in one big message)? Because if they are, I can read them and handle this new situation.
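
    For what it's worth: TCP is a byte stream, so several SendAsync calls can arrive in one EndReceive (and a receive can also end mid-message), which is exactly what the count of 45 shows. The usual fix is the framing the MessageStream.Read loop already hints at: length-prefix each payload and only hand out complete frames. A minimal sketch with illustrative names (not the question's actual Message classes):

        using System;
        using System.Collections.Generic;

        // Sender side: prefix each payload with a 2-byte little-endian length
        // (assumes payloads are under 64 KB).
        static byte[] Frame(byte[] payload)
        {
            byte[] framed = new byte[payload.Length + 2];
            framed[0] = (byte)(payload.Length & 0xFF);
            framed[1] = (byte)((payload.Length >> 8) & 0xFF);
            Buffer.BlockCopy(payload, 0, framed, 2, payload.Length);
            return framed;
        }

        // Receiver side: accumulate bytes, emit only whole frames.
        class FrameReader
        {
            private readonly List<byte> buffer = new List<byte>();

            public void Append(byte[] data, int count)
            {
                for (int i = 0; i < count; i++)
                    buffer.Add(data[i]);
            }

            public bool TryReadFrame(out byte[] payload)
            {
                payload = null;
                if (buffer.Count < 2)
                    return false;
                int length = buffer[0] | (buffer[1] << 8);
                if (buffer.Count < 2 + length)
                    return false;                    // partial message: wait for more bytes
                payload = buffer.GetRange(2, length).ToArray();
                buffer.RemoveRange(0, 2 + length);
                return true;
            }
        }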

  • Appropriate programming design questions.

    - by Edward
    I have a few questions on good programming design. I'm going to first describe the project I'm building so you are better equipped to help me out. I am coding a Remote Assistance Tool similar to TeamViewer, Microsoft Remote Desktop, CrossLoop. It will incorporate concepts like UDP networking (using Lidgren networking library), NAT traversal (since many computers are invisible behind routers nowadays), Mirror Drivers (using DFMirage's Mirror Driver (http://www.demoforge.com/dfmirage.htm) for realtime screen grabbing on the remote computer). That being said, this program has a concept of being a client-server architecture, but I made only one program with both the functionality of client and server. That way, when the user runs my program, they can switch between giving assistance and receiving assistance without having to download a separate client or server module. I have a Windows Form that allows the user to choose between giving assistance and receiving assistance. I have another Windows Form for a file explorer module. I have another Windows Form for a chat module. I have another Windows Form form for a registry editor module. I have another Windows Form for the live control module. So I've got a Form for each module, which raises the first question: 1. Should I process module-specific commands inside the code of the respective Windows Form? Meaning, let's say I get a command with some data that enumerates the remote user's files for a specific directory. Obviously, I would have to update this on the File Explorer Windows Form and add the entries to the ListView. Should I be processing this code inside the Windows Form though? Or should I be handling this in another class (although I have to eventually pass the data to the Form to draw, of course). Or is it like a hybrid in which I process most of the data in another class and pass the final result to the Form to draw? So I've got like 5-6 forms, one for each module. The user starts up my program, enters the remote machine's ID (not IP, ID, because we are registering with an intermediary server to enable NAT traversal), their password, and connects. Now let's suppose the connection is successful. Then the user is presented with a form with all the different modules. So he can open up a File Explorer, or he can mess with the Registry Editor, or he can choose to Chat with his buddy. So now the program is sort of idle, just waiting for the user to do something. If the user opens up Live Control, then the program will be spending most of it's time receiving packets from the remote machine and drawing them to the form to provide a 'live' view. 2. Second design question. A spin off question #1. How would I pass module-specific commands to their respective Windows Forms? What I mean is, I have a class like "NetworkHandler.cs" that checks for messages from the remote machine. NetworkHandler.cs is a static class globally accessible. So let's say I get a command that enumerates the remote user's files for a specific directory. How would I "give" that command to the File Explorer Form. I was thinking of making an OnCommandReceivedEvent inside NetworkHandler, and having each form register to that event. When the NetworkHandler received a command, it would raise the event, all forms would check it to see if it was relevant, and the appropriate form would take action. Is this an appropriate/the best solution available? 3. The networking library I'm using, Lidgren, provides two options for checking networking messages. 
One can either poll ReadMessage(), which returns null or a message, or use an AutoResetEvent, OnMessageReceived (I'm guessing this works like an event). Which one is more appropriate?
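
    Regarding question 2, a minimal sketch of the OnCommandReceived idea (the Command type is a stand-in for the real parsed protocol message):

        using System;

        public class Command { /* stand-in for a parsed protocol message */ }

        public class CommandReceivedEventArgs : EventArgs
        {
            public Command Command { get; private set; }
            public CommandReceivedEventArgs(Command command) { Command = command; }
        }

        public static class NetworkHandler
        {
            public static event EventHandler<CommandReceivedEventArgs> CommandReceived;

            // Called by the receive loop once a complete command has been parsed;
            // each form subscribes and ignores commands that aren't relevant to it.
            // Note: the event fires on the network thread, so forms must marshal
            // back to the UI thread (Control.Invoke) before touching controls.
            internal static void RaiseCommandReceived(Command command)
            {
                EventHandler<CommandReceivedEventArgs> handler = CommandReceived;
                if (handler != null)
                    handler(null, new CommandReceivedEventArgs(command));
            }
        }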

  • What is the best way to solve an Objective-C namespace collision?

    - by Mecki
    Objective-C has no namespaces; it's much like C, everything is within one global namespace. Common practice is to prefix classes with initials, e.g. if you are working at IBM, you could prefix them with "IBM"; if you work for Microsoft, you could use "MS"; and so on. Sometimes the initials refer to the project, e.g. Adium prefixes classes with "AI" (as there is no company behind it whose initials you could take). Apple prefixes classes with NS and says this prefix is reserved for Apple only. So far so good. But prepending 2 to 4 letters to a class name gives a very, very limited namespace. E.g. MS or AI could have entirely different meanings (AI could be Artificial Intelligence, for example) and some other developer might decide to use them and create an equally named class. Bang, namespace collision. Okay, if this is a collision between one of your own classes and one of an external framework you are using, you can easily change the naming of your class, no big deal. But what if you use two external frameworks, both frameworks that you don't have the source to and that you can't change? Your application links with both of them and you get name conflicts. How would you go about solving these? What is the best way to work around them in such a way that you can still use both classes? In C you can work around these by not linking directly to the library; instead you load the library at runtime using dlopen(), then find the symbol you are looking for using dlsym(), assign it to a global symbol (that you can name any way you like), and then access it through this global symbol. E.g. if you have a conflict because some C library has a function named open(), you could define a variable named myOpen and have it point to the open() function of the library; thus when you want to use the system open(), you just use open(), and when you want to use the other one, you access it via the myOpen identifier. Is something similar possible in Objective-C, and if not, is there any other clever, tricky solution you can use to resolve namespace conflicts? Any ideas? Update: Just to clarify this: answers that suggest how to avoid namespace collisions in advance or how to create a better namespace are certainly welcome; however, I will not accept them as the answer since they don't solve my problem. I have two libraries and their class names collide. I can't change them; I don't have the source of either one. The collision is already there, and tips on how it could have been avoided in advance won't help anymore. I can forward them to the developers of these frameworks and hope they choose a better namespace in the future, but for the time being I'm searching for a solution that lets me work with the frameworks right now, within a single application. Any solutions to make this possible?

    Read the article

  • [C#][XNA] Draw() 20,000 32 by 32 Textures or 1 Large Texture 20,000 Times

    - by Rudi
    The title may be confusing; sorry about that, it's a poor summary. Here's my dilemma. I'm programming in C# using the .NET Framework 4, and aiming to make a tile-based game with XNA. I have one large texture (256 pixels by 4096 pixels). Remember this is a tile-based game, so this texture is so massive only because it contains many tiles, which are each 32 pixels by 32 pixels. I think the experts will definitely know what a tile-based game is like. The orientation is orthogonal (like a chess board), not isometric. In the Game.Draw() method, I have two choices, one of which will be incredibly more efficient than the other.

    Choice/Method #1, semi-pseudocode:

    public void Draw()
    {
        // Map tiles are drawn left-to-right, top-to-bottom.
        for (int x = 0; x < mapWidth; x++)
        {
            for (int y = 0; y < mapHeight; y++)
            {
                SpriteBatch.Draw(
                    MyLargeTexture,               // one large 256 x 4096 texture
                    new Rectangle(x, y, 32, 32),  // destination rectangle - ignore this, it's OK
                    new Rectangle(x, y, 32, 32),  // the source rectangle 'cuts out' a 32 x 32 square from the texture, corresponding to the loop
                    Color.White);                 // no tint - ignore this, it's OK
            }
        }
    }

    So, effectively, the first method references one large texture many, many times, each time using a small rectangle of this large texture to draw the appropriate tile image.

    Choice/Method #2, semi-pseudocode:

    public void Draw()
    {
        // Map tiles are drawn left-to-right, top-to-bottom.
        for (int x = 0; x < mapWidth; x++)
        {
            for (int y = 0; y < mapHeight; y++)
            {
                // Getting a small 32 x 32 texture (different each iteration of the loop).
                Texture2D tileTexture = map.GetTileTexture(x, y);
                SpriteBatch.Draw(
                    tileTexture,
                    new Rectangle(x, y, 32, 32),  // destination rectangle - ignore this, it's OK
                    new Rectangle(0, 0, tileTexture.Width, tileTexture.Height),  // the source rectangle uses the entire texture, because the entire texture IS 32 x 32
                    Color.White);                 // no tint - ignore this, it's OK
            }
        }
    }

    So, effectively, the second method draws many small textures many times.

    The question: which method, and why? Personally, I would think it would be incredibly more efficient to use the first method. If you think about what that means for the tile array in a map (think of a large map with 2000 by 2000 tiles, let's say), each Tile object would only have to contain two integers, for the X and Y position of its source rectangle in the one large texture: 8 bytes. With method #2, however, each Tile object in the map's tile array has to store its own 32 x 32 texture, an image, which has to allocate memory for the RGBA pixels: 32 x 32 pixels at 4 bytes each is 4096 bytes per tile. So, which method and why? First priority is speed, then memory load, then efficiency or whatever you experts believe.
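
    One way to see why method #1 is so cheap per tile: each map cell only needs an index into the atlas, and the source rectangle is computed from that index. A hedged sketch (TilesPerRow, tileId, and map.GetTileId are illustrative names, not from the post; Rectangle is the XNA type):

    // Sketch: deriving the 32 x 32 source rectangle from a single int per tile.
    // A 256 x 4096 atlas holds 8 tiles per row (256 / 32) and 128 rows.
    const int TileSize = 32;
    const int TilesPerRow = 8;

    Rectangle SourceRect(int tileId)
    {
        return new Rectangle(
            (tileId % TilesPerRow) * TileSize,  // column within the atlas
            (tileId / TilesPerRow) * TileSize,  // row within the atlas
            TileSize, TileSize);
    }

    // In Draw(), the destination is in screen pixels, the source comes from the atlas:
    // spriteBatch.Draw(MyLargeTexture,
    //     new Rectangle(x * TileSize, y * TileSize, TileSize, TileSize),
    //     SourceRect(map.GetTileId(x, y)),
    //     Color.White);

    Note that the destination rectangle here uses pixel coordinates (x * 32, y * 32), whereas the semi-pseudocode above passes raw tile indices, which the author asked readers to ignore.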

    Read the article

  • How can I provide maximum integration between a calendar-like webapp and desktop calendar applications?

    - by Joshua Carmody
    I've been assigned to upgrade/rewrite a webapp that my company uses to schedule conference calls. One of the goals of the upgrade is to improve integration between the application and our users' Outlook calendars (and ideally other calendar programs as well).

    At present, when a user is viewing the details of a scheduled conference call on the webapp, they can click an "Add to Outlook calendar" link, which points them to a dynamically generated .ical file. On most of our users' systems, Outlook opens the file by default, bringing up the "create calendar appointment" window with the con-call information pre-populated. This link creates a one-time appointment only and has to be clicked for each occurrence of the call. So if a call happened every Monday in June, you would have to click four links to add all the appointments to your calendar. This is the full extent of our current level of integration.

    Ideally, we will be able to upgrade the system so that users can "subscribe" to a con call, meaning that not just the current call but all calls in a recurring series would appear in the user's calendar with a single click. If one call in a series was cancelled or rescheduled, that call's appointment would change in the users' calendars without the user having to do anything, and without upsetting the rest of the series' appointments. Also, any changes to the call's info (say, the phone number was changed) would automatically be updated in the Outlook calendars of anyone who subscribed, without them having to come back to the webapp to double-check that their information is up to date. Ideally this would also work with other popular calendar programs, as well as Google Calendar. I don't know if we'll be able to achieve that level of integration, but I'd like to get as close to it as we can.

    Additional details and challenges:

    We aren't running Exchange on a public server, and I'm not likely to be able to get that changed.
    Assume that our users are basically "the general internet public". They are not members of our office's network, nor can they be; we can't set up network logins or Exchange accounts for them.
    Some of our users are not using Outlook but some other calendar program, and of those using Outlook, not all are on the same version.
    We have users in more than 50 countries who are using this webapp.
    Synchronization would be one-directional: nobody can make changes in their own calendars and expect the server to reflect them or replicate them to other users.
    The current conference calling application is written in ColdFusion. The rewrite will probably be in ASP.NET, but I haven't confirmed that yet. Solutions that work with either or both technologies are appreciated.

    I know that .ical files can theoretically contain more than one event, but in my own experiments I haven't had success getting Outlook (2003) to add more than one event at a time using the .ical file method. Maybe someone knows how to set up a multi-event .ical file that Outlook will accept? Could a link to such an .ical file be "subscribed" to? Is there such a thing as a calendar RSS feed? Could I simulate running an Exchange server? Any other ideas? Thanks everyone!
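
    For the recurring-call case, a single VEVENT with an RRULE describes a whole series, and serving the file at a stable URL gives clients something they can subscribe to and re-fetch. A rough sketch in C# of generating such a feed (the dates, UID, and class name are placeholders; this shows one plausible shape, not a drop-in implementation):

    using System.Text;

    static class CalendarFeed
    {
        // Builds a minimal iCalendar feed containing one weekly recurring event.
        // Serve it with Content-Type: text/calendar; a stable UID lets clients
        // update the series in place when details (e.g. the phone number) change.
        public static string Build()
        {
            var sb = new StringBuilder();
            sb.AppendLine("BEGIN:VCALENDAR");
            sb.AppendLine("VERSION:2.0");
            sb.AppendLine("PRODID:-//Example Corp//ConCall Scheduler//EN");
            sb.AppendLine("BEGIN:VEVENT");
            sb.AppendLine("UID:concall-1234@example.com");
            sb.AppendLine("DTSTAMP:20100601T120000Z");
            sb.AppendLine("DTSTART:20100607T150000Z");
            sb.AppendLine("DTEND:20100607T160000Z");
            // Every Monday through the end of June; a cancelled occurrence
            // would be carried as an EXDATE line on this same event.
            sb.AppendLine("RRULE:FREQ=WEEKLY;BYDAY=MO;UNTIL=20100628T235959Z");
            sb.AppendLine("SUMMARY:Weekly project conference call");
            sb.AppendLine("END:VEVENT");
            sb.AppendLine("END:VCALENDAR");
            return sb.ToString();
        }
    }

    One caveat against the question's constraints: Outlook 2003 can import .ics files but has no real feed-subscription support; subscribing to an internet calendar URL arrived with Outlook 2007, while Google Calendar and Apple's calendar handle subscribed .ics feeds fine.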

    Read the article

  • C#: Parallel forms, multithreading and "applications in application"

    - by Harry
    First, what I need: n WebBrowsers, each in its own window doing its own job. The user should be able to see all of them, or just one (or none), and to execute commands on each one. There is a main form, without a browser, which contains the control panel for my application. The key feature is that each browser logs on to a secured web page and needs to stay logged in as long as possible. Well, I've done it, but I'm afraid something is wrong with my approach. The question is: is the code below valid, or rather a nasty hack that can cause problems?

    internal class SessionList : List<Session>
    {
        public SessionList(Server main)
        {
            MyRecords.ForEach(record =>
            {
                var st = new System.Threading.Thread((data) =>
                {
                    var s = new Session(main, data as MyRecord);
                    this.Add(s);
                    Application.Run(s);
                    Application.ExitThread();
                });
                st.SetApartmentState(System.Threading.ApartmentState.STA);
                st.Start(record);
            });
        }

        // some other uninteresting methods here...
    }

    What's going on here? Session inherits from Form, so this creates a form, puts a WebBrowser into it, and has methods to operate on websites. WebBrowser requires an STA thread, so we provide one for each browser. The most interesting part is Application.Run(s): it makes the newly created forms alive and interactive. The Application.ExitThread() that follows is called after the browser window is closed and its controls are disposed; the main application stays alive to perform the rest of the cleanup. When the user selects "Exit" or "Shutdown", the browser threads are ended first, so Application.ExitThread() is called.

    It all works, but everywhere I read about "the main GUI thread", and here I've created many GUI threads. I handle communication between the main form and my new session forms with thread-safe methods using Invoke(). It all works, so is it right or is it wrong? Is it fine to use Application.Run() more than once in one application? :) An ugly hack or a normal practice?

    This code dies if I start a WebBrowser from the session form's thread, and it beats me why. It works, however, if I start the WebBrowser (by changing its Url property) from any other thread. I'd like to know more about what is really happening in such an application. But most of all, I'd like to know if my idea of "applications in application" is OK. I'm not sure what exactly Application.Run() does; without it, forms created in new threads were dead and unresponsive. How is it possible that I can call Application.Run() many times? It seems to do exactly what it should, but it feels like an undocumented feature to me. I'm almost sure the crashes are caused by the WebBrowser component itself (since it's not completely "managed" and wraps a native control), but maybe it's something else.
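
    For what it's worth, Application.Run is documented to start a standard message loop on the current thread, so one loop per thread is a supported pattern rather than a hack. A stripped-down sketch of the same idea, plus a thread-safe call back into a form (SessionForm and statusLabel are illustrative names, not from the post):

    // Sketch: one STA thread and one message loop per session form.
    var thread = new System.Threading.Thread(() =>
    {
        var form = new SessionForm();
        System.Windows.Forms.Application.Run(form);  // blocks until the form closes
    });
    thread.SetApartmentState(System.Threading.ApartmentState.STA);
    thread.IsBackground = true;  // so a stray session can't keep the process alive
    thread.Start();

    // Thread-safe update from any other thread into that form:
    void SetStatus(SessionForm form, string text)
    {
        if (form.InvokeRequired)
            form.Invoke((System.Action)(() => form.statusLabel.Text = text));
        else
            form.statusLabel.Text = text;
    }

    As for the WebBrowser crash: the control wraps the native IE ActiveX control, which is sensitive to which thread creates it and which thread pumps its messages; that interplay is a plausible source of the crash, though that is an assumption worth verifying, not a diagnosis.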

    Read the article

  • Android: inserting a JSON object into SQLite

    - by Nizmon
    I'm trying to insert the JSON response below, from a server, into my SQLite DB and then read it back. The problem: when I run my code it compiles fine and runs with no errors, but when reading from the DB I just get back what seems like an empty string, so I know that the table is being created. I have the correct permissions within the manifest, and I have also referenced all my classes within it.

    {"locations": [{"locations":"Newcastle","location_id":"1"},{"locations":"London","location_id":"2"},{"locations":"Sunderland","location_id":"3"}]}

    Where I log with Log.v("one", one); Log.v("two", two); below, only the first pair within the JSON object is printed, so Newcastle and 1. I don't get any errors at all, which is stumping me. When I call the getName method of the Location class I just seem to get a blank string back!

    // This class creates the table and inserts into / reads from the SQLite DB.
    public class Location {

        private DatabaseHelper mDbHelper;
        private SQLiteDatabase mDb;
        private final Context mCtx;

        private static final String vd_location =
            "CREATE TABLE " + TABLE_VD_LOCATION + " ("
            + LOCATION + " TEXT,"
            + LOCATION_ID + " TEXT"
            + ");";

        private static class DatabaseHelper extends SQLiteOpenHelper {

            DatabaseHelper(Context context) {
                super(context, DATABASE_NAME, null, DATABASE_VERSION);
            }

            @Override
            public void onCreate(SQLiteDatabase db) {
                db.execSQL(vd_location);
            }

            @Override
            public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
            }
        }

        public Location(Context ctx) {
            this.mCtx = ctx;
        }

        public Location open() throws SQLException {
            this.mDbHelper = new DatabaseHelper(this.mCtx);
            this.mDb = this.mDbHelper.getWritableDatabase();
            return this;
        }

        public void close() {
            this.mDbHelper.close();
        }

        public void addOffer(JSONObject json) {
            try {
                // NB: the sample response above uses the key "locations", not
                // "offers"; a mismatch here throws a JSONException that the
                // empty catch block below silently swallows.
                JSONArray earthquakes = json.getJSONArray("offers");
                for (int i = 0; i < earthquakes.length(); i++) {
                    open();
                    JSONObject e = earthquakes.getJSONObject(i);
                    String one = e.getString("locations");
                    String two = e.getString("location_id");
                    Log.v("one", one);
                    Log.v("two", two);
                    mDb.execSQL("INSERT INTO " + TABLE_VD_LOCATION
                            + " ( locations, location_id ) VALUES (?, ?)",
                            new Object[] { e.getString("locations"),
                                           e.getString("location_id") });
                    close();
                }
            } catch (Exception e) {
                // any JSON or SQL error disappears here
            } finally {
                close();
            }
        }

        public String getName(long l) throws SQLException {
            String[] columns = new String[] { LOCATION, LOCATION_ID };
            Cursor c = mDb.query(TABLE_VD_LOCATION, columns,
                    null, null, null, null, null);
            String result = "";
            int iRow = c.getColumnIndex(LOCATION);
            int iName = c.getColumnIndex(LOCATION_ID);
            for (c.moveToFirst(); !c.isAfterLast(); c.moveToNext()) {
                result = result + c.getString(iRow) + " "
                       + c.getString(iName) + " " + "\n";
            }
            return result;
        }
    }

    // This code reads from the DB. It's just some very hacked-together code,
    // so excuse it; it works when used on other tables.
    public void getData() {
        boolean didItWork = true;
        try {
            Location loc = new Location(this);
            loc.open();
            String data = loc.getName(0);
            loc.close();
            Dialog t = new Dialog(this);
            t.setTitle("get" + data);
            t.show();
        } catch (Exception e) {
            didItWork = false;
            String error = e.toString();
            Dialog d = new Dialog(this);
            d.setTitle("Dang it!");
            TextView tv = new TextView(this);
            tv.setText(error);
            d.setContentView(tv);
            d.show();
        } finally {
            if (didItWork) {
                Dialog d = new Dialog(this);
                d.setTitle("Heck Yea!");
                TextView tv = new TextView(this);
                tv.setText("Success");
                d.setContentView(tv);
                d.show();
                // entry.close();
            }
        }
    }

    Read the article

  • Can a Python script know that another instance of the same script is running... and then talk to it?

    - by Justin Grant
    I'd like to prevent multiple instances of the same long-running Python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way? Specifically, I'd like to enable the following behavior:

    1. "foo.py" is launched from the command line, and it will stay running for a long time: days or weeks, until the machine is rebooted or the parent process kills it.
    2. Every few minutes the same script is launched again, but with different command-line parameters.
    3. When launched, the script should see if any other instances are running.
    4. If other instances are running, then instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit.
    5. Instance #1, if it receives command-line parameters from another script, should spin up a new thread and (using the command-line parameters sent in the step above) start performing the work that instance #2 was going to perform.

    So I'm looking for two things: how can a Python program know another instance of itself is running, and how can one Python command-line program communicate with another?

    Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and no OS-specific calls. Although if I need a Windows codepath and a *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a "same code" solution isn't possible.

    I realize I could probably work out a file-based approach (e.g. instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work), but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally be able to use an in-memory solution. But again I'm flexible; if a persistent-file-based approach is the only way to do it, I'm open to that option.

    More details: I'm trying to do this because our servers are using a monitoring tool that supports running Python scripts to collect monitoring data (e.g. results of a database query or web service call), which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g. making a DB connection vs. running a query), so we've chosen to keep them running in an infinite loop until the parent process kills them. This works great, but on larger servers 100 instances of the same script may be running, even if they're only gathering data every 20 minutes each. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with one thread each to one process with 100 threads, each executing the work that, previously, one script was doing. But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command-line params) over to the "old" one.

    Read the article

  • What is the best free or low-cost Java reporting library (e.g. BIRT, JasperReports, etc.) for making simple reports?

    - by Max3000
    I want to print, email and write to PDF very simple reports. The reports are basically a list of items, divided into various sections/columns; the sections are not necessarily identical. Think newspaper.

    I just wasted a solid two days of work trying to make this kind of report using JasperReports. I find that Jasper is great for outputting "normalized" data, the kind that would come out of a database for instance, each row neatly describing an item and each item printed on a line. I'm simplifying a bit, but that's the idea. However, given what I want to do, I always ended up completely lost: data not being displayed for no apparent reason, text columns never the correct size, column positioning always ending up incorrect, pagination not sanely possible (I was never able to figure it out; the FAQ gives an obscure workaround), etc. I came to the conclusion that Jasper is really not built to make the kind of reports I want. Am I missing something? I'm ready to pay for a tool, as long as the price is reasonable. By reasonable I mean a few hundred dollars. Thanks.

    EDIT: To answer cetus, here is more information about the report I made in Jasper. What I want is something like this:

    text text text text
    -------------------
    text | text
    text |----------
    text | text
    text | text
    --------| text
    text |----------
    text | text

    What I made in Jasper is this:

    (detail band)
    subreport | subreport
    ------------------------------------
    subreport | subreport
    ------------------------------------
    subreport | subreport

    The subreports are all the same actual report. This report has one field (called "field") and basically just prints this field in a detail band; hence, running a single subreport simply lists all items from the datasource. The datasource itself is a simple custom JRDatasource containing a collection of strings in the field "field"; it iterates over the collection until there are no more strings. Each subreport has its own datasource. I tried many different variations of the above, with all sorts of different properties for the report, subreports, etc. IMO, this is fairly simple stuff. However, the problems I encounter are as follows:

    1. Subreports starting from the third don't show up when their position type is 'float'. They do show up when they have 'fix relative to top', but I don't want that, because the first two subreports can be of any length.
    2. I can't make each subreport stretch according to its own length. Instead, they either don't stretch at all (which is not desirable because they have different lengths) or they all stretch according to the longest subreport, which makes for a weird layout.
    3. Pagination doesn't happen. If some subreports fall outside the page, they simply don't show. One alternative is to increase the page height considerably and the detail band height accordingly, but then it is not really possible to know the total height in advance, so I'm stuck calculating/guessing it myself before the report is even generated. More importantly, long reports end up on one page, and this is not acceptable (the printout text is too small, and it's ugly/unprofessional to have different reports with different PDF page lengths, etc.).

    BTW, I used iReport, so it's possibly limitations of iReport I'm listing here and not of Jasper itself. That's one of the things I'm trying to find out by asking this question here. One alternative would be to generate the jrxml myself with just static text, but I'm afraid I'd encounter the very same limitations. Anyway, I've generally wasted so much time getting anything done with Jasper that I can't help thinking it's not the right tool for the job. (Not to say that Jasper doesn't excel at what it's good at.)

    Read the article

  • Please clarify how create/update happens against child entities of an aggregate root

    - by christian
    After much reading and thinking as I begin to get my head wrapped around DDD, I am a bit confused about the best practices for dealing with complex hierarchies under an aggregate root. I think this is a FAQ, but after reading countless examples and discussions, no one is quite talking about the issue I'm seeing.

    If I am aligned with the DDD thinking, entities below the aggregate root should be immutable. This is the crux of my trouble, so if that isn't correct, that is why I'm lost. Here is a fabricated example; I hope it holds enough water to discuss.

    Consider an automobile insurance policy (I'm not in insurance, but this matches the language I hear when on the phone with my insurance company). Policy is clearly an entity. Within the policy, let's say we have Auto. Auto, for the sake of this example, only exists within a policy (maybe you could transfer an Auto to another policy, so it is a potential aggregate as well, which changes Policy... but assume it's simpler than that for now). Since an Auto cannot exist without a Policy, I think it should be an entity but not a root. So Policy in this case is the aggregate root.

    Now, to create a Policy, let's assume it has to have at least one Auto. This is where I get frustrated. Assume Auto is fairly complex, including many fields and maybe a child for where it is garaged (a Location). If I understand correctly, a "create Policy" constructor/factory would have to take an Auto as input, or be restricted via a builder such that it cannot be created without one. And the Auto's creation, since it is an entity, can't be done beforehand (because it is immutable? maybe this is just an incorrect interpretation). So you don't get to say new Auto and then setX, setY, add(Z). If Auto is more than somewhat trivial, you end up having to build a huge hierarchy of builders and such to try to manage creating an Auto within the context of the Policy.

    One more twist: later, after the Policy is created, one wishes to add another Auto, or update an existing Auto. Clearly, the Policy controls this... fine... but Policy.addAuto() won't quite fly, because one can't just pass in a new Auto (right!?). Examples say things like Policy.addAuto(VIN, make, model, etc.), but they are all so simple that that looks reasonable. If this factory-method approach falls apart with too many parameters (the entire Auto interface, conceivably), I need another solution.

    From that point in my thinking, I'm realizing that having a transient reference to an entity is OK. So maybe it is fine to have an entity created outside of its parent within the aggregate, in a transient environment, so maybe it is OK to say something like:

    auto = AutoFactory.createAuto();
    auto.setX(...);
    auto.setY(...);

    or, sticking to immutability:

    auto = AutoBuilder.new().setX(...).setY(...).build();

    and then have it sorted out when you say Policy.addAuto(auto).

    This insurance example gets more interesting if you add events, such as an Accident with its PolicyReports or RepairEstimates: some value objects, but mostly entities that are all really meaningless outside the policy, at least for my simple example. The lifecycle of Policy, with its hierarchy growing over time, seems the fundamental picture I must draw before really starting to dig in; and it is more the factory concept, or how the child entities get built/attached to an aggregate root, that I haven't seen a solid example of. I think I'm close. Hope this is clear and not just a repeat FAQ that has answers all over the place.
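
    One common way around the huge-parameter-list problem described above is to hand the aggregate root a value object describing the child, so the root still controls construction of its entities. A hedged sketch in C# (Policy and Auto follow the question; AutoSpec and the member shapes are invented for illustration):

    using System.Collections.Generic;

    // An immutable value object carrying the data needed to build an Auto.
    public class AutoSpec
    {
        public string Vin { get; private set; }
        public string Make { get; private set; }
        public string Model { get; private set; }

        public AutoSpec(string vin, string make, string model)
        {
            Vin = vin; Make = make; Model = model;
        }
    }

    public class Auto
    {
        // internal: only code inside the aggregate's assembly constructs it
        internal Auto(AutoSpec spec) { /* copy fields from the spec */ }
    }

    public class Policy
    {
        private readonly List<Auto> autos = new List<Auto>();

        private Policy() { }

        // The factory enforces the invariant "a policy has at least one auto".
        public static Policy Create(AutoSpec firstAuto)
        {
            var policy = new Policy();
            policy.AddAuto(firstAuto);
            return policy;
        }

        // Adding or updating an Auto always goes through the root.
        public void AddAuto(AutoSpec spec)
        {
            autos.Add(new Auto(spec));
        }
    }

    The value object plays the role of the transient builder result in the question: it can be assembled freely outside the aggregate, but only Policy turns it into an entity.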

    Read the article

  • Which of CouchDB or MongoDB suits my needs?

    - by vonconrad
    Where I work, we use Ruby on Rails to create both backend and frontend applications. Usually, these applications interact with the same MySQL database. It works great for the majority of our data, but we have one situation I would like to move to a NoSQL environment.

    We have clients, and our clients have what we call "inventories": one or more of them. An inventory can have many thousands of items. This is currently done through two relational database tables, inventories and inventory_items. The problems start when two different inventories have different parameters:

    # Inventory item from inventory 1, televisions
    {
      inventory_id: 1
      sku: 12345
      name: Samsung LCD 40 inches
      model: 582903-4
      brand: Samsung
      screen_size: 40
      type: LCD
      price: 999.95
    }

    # Inventory item from inventory 2, accommodation
    {
      inventory_id: 2
      sku: 48cab23fa
      name: New York Hilton
      accomodation_type: hotel
      star_rating: 5
      price_per_night: 395
    }

    Since we obviously can't use brand or star_rating as the column name in inventory_items, our solution so far has been to use generic column names such as text_a, text_b, float_a, int_a, etc., and introduce a third table, inventory_schemas. The tables now look like this:

    # Inventory schema for inventory 1, televisions
    {
      inventory_id: 1
      int_a: sku
      text_a: name
      text_b: model
      text_c: brand
      int_b: screen_size
      text_d: type
      float_a: price
    }

    # Inventory item from inventory 1, televisions
    {
      inventory_id: 1
      int_a: 12345
      text_a: Samsung LCD 40 inches
      text_b: 582903-4
      text_c: Samsung
      int_b: 40
      text_d: LCD
      float_a: 999.95
    }

    This has worked well... up to a point. It's clunky, it's unintuitive and it lacks scalability. We have to devote resources to setting up inventory schemas, and using separate tables is not an option.

    Enter NoSQL. With it, we could let each and every item have its own parameters and still store them together. From the research I've done, it certainly seems like a great alternative for this situation. Specifically, I've looked at CouchDB and MongoDB. Both look great. However, there are a few other things we need to be able to do with our inventory:

    We need to be able to select items from only one (or several) inventories.
    We need to be able to filter items based on their parameters (e.g. get all items from inventory 2 where type is 'hotel').
    We need to be able to group items based on parameters (e.g. get the lowest price from items in inventory 1 where brand is 'Samsung').
    We need to (potentially) be able to retrieve thousands of items at a time.
    We need to be able to access the data from multiple applications, both backend (to process data) and frontend (to display data).
    Rapid bulk insertion is desired, though not required.

    Based on the structure and the requirements, is either CouchDB or MongoDB suitable for us? If so, which one would be the best fit?

    Thanks for reading, and thanks in advance for answers.

    EDIT: One of the reasons I like CouchDB is that the frontend application could request data via JavaScript directly from the server after page load, and display the results without using any backend code whatsoever. This would lead to better page load times and less server strain, as the fetching/processing of the data would be done client-side.
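
    For the filtering requirement, schemaless documents make the query direct, with no schema-table indirection. A sketch using the official MongoDB .NET driver, purely to show the shape of such a query (the database, collection, and connection string are placeholders; the question's Rails stack would use the equivalent Ruby driver calls):

    using System.Collections.Generic;
    using MongoDB.Bson;
    using MongoDB.Driver;

    public static class InventoryQueries
    {
        // "Get all items from inventory 2 where the type parameter is 'hotel'."
        public static List<BsonDocument> HotelsInInventory2()
        {
            var client = new MongoClient("mongodb://localhost:27017");
            var db = client.GetDatabase("shop");
            var items = db.GetCollection<BsonDocument>("inventory_items");

            // Each document carries whatever fields its inventory defines,
            // so the filter simply names them directly.
            var filter = Builders<BsonDocument>.Filter.Eq("inventory_id", 2)
                       & Builders<BsonDocument>.Filter.Eq("accomodation_type", "hotel");

            return items.Find(filter).ToList();
        }
    }

    The grouping requirement (lowest price per brand) maps to MongoDB's aggregation pipeline or to a CouchDB map/reduce view; both engines cover the listed needs, so the choice tends to hinge on ad-hoc queries (MongoDB) versus predefined views with straightforward HTTP/JSON access (CouchDB), the latter being exactly what the EDIT describes.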

    Read the article
