Search Results

Search found 9343 results on 374 pages for 'generation d systems'.


  • Configuring JPA Primary key sequence generators

    - by pachunoori.vinay.kumar(at)oracle.com
    This article describes the JPA feature of generating and assigning unique sequence numbers to a JPA entity, and covers the JPA sequence generator annotations and their usage. Use case: adding a new Employee to the organization using the Employee form should assign a unique employee ID. The following steps describe how to implement the generation of unique employee numbers using the JPA generators feature.
    Steps to configure JPA generators:
    1. Generate the Employee entity using the "Entities from Table" wizard.
    2. Create a database connection, select the table "Employee" for which the entity will be generated, and finish the wizard with the default selections.
    3. In the offline database sources, under Schema, create a Sequence object, or copy one to the offline database from the online database connection.
    4. Open persistence.xml in the Application Navigator, select the entity "Employee" in the Structure view, and select the "Generators" tab in the flat editor.
    5. In the Sequence Generator section, enter the sequence name "InvSeq" and select the sequence created in step 3 from the drop-down list.
    6. Expand Employee in the Structure view and select EmployeeId, then select the "Primary Key Generation" tab.
    7. In the Generated Value section, select the "Use Generated Value" check box, select "Sequence" as the strategy, and select the generator "InvSeq" defined above.
    The following annotations are added for the JPA generator configured in JDeveloper for an entity. To use a specific named sequence object (whether it is generated by schema generation or already exists in the database) you must define a sequence generator using a @SequenceGenerator annotation. Provide a unique label as the name for the sequence generator, and refer to that name in the @GeneratedValue annotation along with the generation strategy. For example, see the Employee entity sample code below, configured for sequence generation: EMPLOYEE_ID is the primary key and is configured for automatic generation of sequence numbers, and EMPLOYEE_SEQ is the sequence object that exists in the database. This sequence generates the numbers that are assigned as primary key values to the EMPLOYEE_ID column of the Employee table.

    @SequenceGenerator(name = "InvSeq", sequenceName = "EMPLOYEE_SEQ")
    @Entity
    public class Employee implements Serializable {
        @Id
        @Column(name = "EMPLOYEE_ID", nullable = false)
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "InvSeq")
        private Long employeeId;
    }

    @SequenceGenerator defines the sequence generator based on a database sequence object.
    Usage: @SequenceGenerator(name = "SequenceGenerator", sequenceName = "EMPLOYEE_SEQ")
    @GeneratedValue defines the generation strategy and refers to the sequence generator.
    Usage: @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "name of the sequence generator defined in @SequenceGenerator")
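    As a quick illustration of what this configuration does at run time, the sketch below persists a new Employee and reads back the key assigned from the sequence. This is not part of the original article: the persistence unit name "EmployeePU" and the getEmployeeId() accessor are assumptions made for the example.

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class EmployeeDemo {
        public static void main(String[] args) {
            // "EmployeePU" is an assumed persistence unit name; adjust to your persistence.xml.
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("EmployeePU");
            EntityManager em = emf.createEntityManager();
            em.getTransaction().begin();
            Employee employee = new Employee();
            em.persist(employee);   // the provider draws the next value from EMPLOYEE_SEQ via "InvSeq"
            em.getTransaction().commit();
            System.out.println("Assigned id: " + employee.getEmployeeId());   // assumed getter
            em.close();
            emf.close();
        }
    }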

    Read the article

  • Activation context generation failed for "C:\php\php-cgi.exe". Dependent Assembly

    - by Eyla
    Greetings, I have Windows Server 2008 Server Core and I want to configure this server to host PHP websites using IIS 7. I installed and configured IIS 7 to run PHP using the steps on this website: http://blogs.msdn.com/b/philpenn/archive/2009/07/19/deploying-iis-7-5-fastcgi-php-on-server-core.aspx Now I'm facing a problem: when I request my PHP website I get this error: Server Error 500 - Internal server error. There is a problem with the resource you are looking for, and it cannot be displayed. I checked the event log and I found these details too: Activation context generation failed for "C:\php\php-cgi.exe". Dependent Assembly Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8" could not be found. Please use sxstrace.exe for detailed diagnosis. I searched for this error and found a suggested solution, which is to install the Microsoft Visual C++ 2008 SP1 Redistributable Package (x86). I installed it, but I'm still getting the same error. Please help me solve this problem, and let me know if you need more information about my issue.

    Read the article

  • Visual Studio 2008 Generation of Designer File Failed (Cannot use a leading .. to exit above the top

    - by tardomatic
    Hi, it seems the universe is against me this week. I have been happily coding away on my ASP.NET application for weeks now without issues. Today I tried to add a textbox to a form, and on saving the form I received the following error: Generation of designer file failed: Cannot use a leading .. to exit above the top directory. I googled, but with no luck. I did find a blog post that shows how to add a key to the registry so that Visual Studio logs more detail about these errors, and the following is what shows up in the generated log file:
    -------------------------------------------------------------
    C:\[path to aspx file]\PageName.aspx
    Generation of designer file failed: Cannot use a leading .. to exit above the top directory.
    -------------------------------------------------------------
    System.Web.HttpException: Cannot use a leading .. to exit above the top directory.
        at System.Web.Util.UrlPath.ReduceVirtualPath(String path)
        at System.Web.Util.UrlPath.Reduce(String path)
        at System.Web.Util.UrlPath.Combine(String appPath, String basepath, String relative)
        at System.Web.VirtualPath.Combine(VirtualPath relativePath)
        at System.Web.VirtualPath.Combine(VirtualPath v1, VirtualPath v2)
        at System.Web.VirtualPathUtility.Combine(String basePath, String relativePath)
        at Microsoft.VisualStudio.Web.Application.Parser.BeginParse(String virtualPath, String text)
        at Microsoft.VisualStudio.Web.Application.Generator.UpdateDesignerClass(String document, String codeBehind, String codeBehindFile, String[] publicFields, UDC_Flags flags)
    -------------------------------------------------------------
    And, of course, this means there is no way I can reference the newly added textbox from the code-behind. I thought it might be just this page giving the issue, but I have tried three other pages with the same result. I haven't changed the environment for weeks, so I am not sure how this happened. Any ideas out there? Thanks in advance. Hamish

    Read the article

  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderately sized SQL Server 2008 database (around 120 tables, backups around 4 GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well thus far but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow the use of multiple servers accessing a common database backend. Transitioning to the hi-lo generator seems to be the best way forward, but I can't find much detail about how such a migration would work. Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually? If NHibernate does insert rows automatically, does it properly take account of existing key values? If NHibernate does take care of things automatically, that's great. If not, are there any tools to help?
    Update: NHibernate's increment identifier generator works entirely in memory. It is seeded by selecting the maximum value of used identifiers from the table, but from that point on it allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes. For comparison, NHibernate's identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit-of-work pattern. The hi-lo algorithm sits in between these: generation of primary keys is coordinated through the database, allowing for multiprocessing, but actual allocation can occur entirely in memory, avoiding problems with the unit-of-work pattern.
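    The hi-lo scheme itself is compact enough to sketch. The snippet below is a minimal Java illustration of the allocation idea only, not NHibernate's implementation: the table name hi_value, the column next_hi and the block size of 100 are assumptions chosen for the example, and a production generator would reserve the hi block inside its own transaction.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiLoSketch {
        private static final int MAX_LO = 100;  // keys handed out per database round trip
        private long hi = -1;                   // current "hi" block reserved from the database
        private int lo = MAX_LO;                // position within the current block

        public synchronized long nextKey(Connection db) throws Exception {
            if (lo >= MAX_LO) {                 // block exhausted: reserve a new one
                try (Statement st = db.createStatement()) {
                    ResultSet rs = st.executeQuery("SELECT next_hi FROM hi_value");
                    rs.next();
                    hi = rs.getLong(1);
                    st.executeUpdate("UPDATE hi_value SET next_hi = next_hi + 1");
                }
                lo = 0;
            }
            return hi * MAX_LO + lo++;          // keys inside a block never touch the database
        }
    }

    Because only the block reservation hits the database, several servers or processes can allocate keys concurrently without colliding, which is the multiprocessing property described above.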

    Read the article

  • What .NET reporting tools are best for dynamic report generation?

    - by bvanderw
    Perhaps I need to define "dynamic generation". By this I mean using graphics primitives to draw on the page (such as DrawText or DrawLine, etc.), which is what System.Drawing.Printing provides. I often need to create forms and reports for Windows applications that either require dynamic generation or where I need control over the formatting that stretches or goes beyond the capabilities of most report designers. Essentially, I need the ability to create my own pages using graphics primitives, like you can do with System.Drawing.Printing, as part of a package that also provides a report designer, exporting to PDF, etc. In my Delphi days, I used Rave Reports (along with the exporting add-ons from Gnostice) because it was the only Delphi reporting tool that gave you that kind of fine control. I've been struggling with the reporting tools provided by Developer Express and I have given up trying to make them do what I need to do. I downloaded a trial of ActiveReports and was able to completely create one of my dynamic reports (using their Page class) in a few hours one afternoon. It's likely I will buy their product, but it's a bit frustrating to have to do so after investing in the Developer Express tools. Before I do so, are there any other products offering this functionality that I should investigate? As far as I can tell, Crystal Reports does not - is this correct? Thanks.... --Bruce

    Read the article

  • JAXB code generation: how to remove a zero occurrence field?

    - by reef
    Hi all, I use JAXB 2.1 to generate Java classes from several XSD files, and I have a problem related to complex type restriction. One of the restrictions modifies the occurrence configuration from minOccurs="0" maxOccurs="unbounded" to minOccurs="0" maxOccurs="0", so this field is not needed anymore in the restricted type. But JAXB actually generates the restricted class with a [0..1] cardinality instead of 0. By the way, the generation is tuned with <xjc:treatRestrictionLikeNewType/> so that an XSD restriction is not mapped to a Java class inheritance. Here is an example.
    Here is the way a field is defined in a complex type A:
    <element name="qualifier" type="CR" maxOccurs="unbounded" minOccurs="0"/>
    Here is the way the same field is restricted in another complex type B that restricts A:
    <element name="qualifier" type="CR" minOccurs="0" maxOccurs="0"/>
    In the generated class for A I have:
    @XmlElement(name = "qualifier")
    protected List<CR> qualifiers;
    And in the generated class for B I have:
    protected CR qualifiers;
    With my poor understanding of JAXB, the absence of the @XmlElement annotation tells JAXB not to marshal/unmarshal this field. Am I wrong? If I am right, is there a way to tell JAXB not to generate the qualifiers field at all? In my opinion that would be a much better generation, as it respects the constraints. Any ideas or thoughts on the topic? Thanks!!

    Read the article

  • Does the Win XP/7 dual boot "missing restore points" problem apply to systems with separate hard disks for each O/S?

    - by Robert Oschler
    I'm in the process of installing Windows 7/64 on a system with Windows XP/32 on it. During my research, I read about a problem that occurs in the dual boot scenario where Windows XP deletes Windows 7's restore points when it accesses the Windows 7 volume: http://support.microsoft.com/kb/926185 I found a workaround but it seems pretty painful since it appears to involve using the registry to make the Windows 7 volume appear invisible or "offline" to Windows XP, making sharing disk data between the two O/S annoying since you have to use something like an external storage device to get it done: http://www.vistax64.com/tutorials/127417-system-restore-points-stop-xp-dual-boot-delete.html I was wondering if this problem only occurs with systems that have both O/S installed on the same physical hard drive (in different partitions)? In my case, I will have each O/S on a completely separate physical hard drive. Any other tips would be appreciated. -- roschler

    Read the article

  • What's the largest message size modern email systems generally support?

    - by Phil Hollenback
    I know that Yahoo and Google mail support 25MB email attachments. I have an idea from somewhere that 10MB email messages are generally supported by modern email systems. So if I'm sending an email between two arbitrary users on the internet, what's the safe upper bound on message size? 1MB? 10MB? 25MB? I know that one answer is 'don't send big emails, use some sort of drop box'. I'm looking for a guideline if you are limited to only using regular smtp email.

    Read the article

  • UEC - Can the Cluster Controller and Storage Controller be separate systems?

    - by Jeremy Hajek
    My department is implementing an Ubuntu Enterprise Cloud. I have done the testing and am quite comfortable with the 4 pieces: CC/SC, CLC, WS, NC. Looking at the various documents below, it appears that the Storage Controller and Cluster Controller (eucalyptus-sc and eucalyptus-cc) are always installed on the same system. My question is this: can I install the Storage Controller and the Cluster Controller on separate systems?
    http://open.eucalyptus.com/wiki/EucalyptusAdvanced_v2.0 - the picture indicates that the CC and SC are two different machines
    http://www.canonical.com/sites/default/files/active/Whitepaper-UbuntuEnterpriseCloudArchitecture-v1.pdf - p. 10, first paragraph, uses the word "machine(s)"
    http://software.intel.com/file/31966 - p. 8 indicates the same separate architecture
    BUT... https://help.ubuntu.com/community/UEC/PackageInstallSeparate indicates that the SC and CC are to be on the same system.

    Read the article

  • RHEL/CentOS vs. Ubuntu (possibly other Debian-based systems) Linux in handling duplicate IPs in the same subnet

    - by johnshen64
    This has bothered me for quite a while, but I never found out why it happens or how to change the behavior. IP duplicates can be caused by typos or DHCP errors, etc., but they do occur from time to time. In RPM-based systems such as CentOS, the old server with the duplicate IP wins, and the new server gets an error when bringing up the NIC (IP address already in use). This is somewhat harmless because we can just fix the system that is coming up. Ubuntu, on the other hand, happily grabs the used IP for itself and leaves the old server/device without a valid IP. This is the more dangerous behavior because it causes outages. What I want is to change the Ubuntu behavior to that of CentOS/RHEL, so I would appreciate any help.

    Read the article

  • What are the strengths and weaknesses of existing configuration management systems?

    - by Daniel C. Sobral
    I was looking up here for some comparisons between CFEngine, Puppet, Chef, bcfg2, AutomateIt and whatever other configuration management systems might be out there, and was very surprised I could find very little here on Server Fault. For instance, I only knew of the first three above; the other two I found in a related Google search. So, I'm not interested in what people think is the best one, or which they like. I'd like to know the following:
    - Configuration management system's name.
    - Why it was created (as opposed to using an existing solution).
    - Relative strengths.
    - Relative weaknesses.
    - License.
    - Link to project and examples.

    Read the article

  • Operating Systems supported by the Intel SR1435VP2 Server Platform?

    - by Xspence
    I recently had two Intel SR1435VP2 servers (with SE7320VP2 server boards) donated to me by a colleague. Google hasn't yielded much more than user manuals when searching for OS-compatibility answers. I have worked with flavors of Linux such as Ubuntu and Debian, but Intel only lists proprietary operating systems such as SuSE, Solaris and Red Hat as tested on their driver downloads page. Has anyone worked with these machines before, and if so, do you know if the SR1435VP2/E7320 chipset supports OSes such as CentOS, Debian or Ubuntu? If you need more information or clarification, let me know. This is all new for me. Thanks in advance.

    Read the article

  • Can I run virtualized 64-bit Operating Systems if my CPU doesn't support VT-x?

    - by tintinmj
    I have installed VMware Workstation 10.0 on my Compaq CQ60-615DX laptop. The operating system is Windows 7 Home Premium. When I tried to run Ubuntu 14.04 64-bit in a virtual machine in VMware, I got an error saying: "This virtual machine is configured for 64-bit guest operating systems. However, 64-bit operation is not possible. This host does not support Intel VT-x. For more detailed information, see http://vmware.com/info?id=152." So I googled and found that I have to enable Intel VT-x, but then I found out that my processor doesn't support Intel® Virtualization Technology (VT-x). So am I doomed, never able to run any virtual OS on my laptop? Or can I at least run 32-bit OSes?

    Read the article

  • Why does casting to double using "String * 1" fail? Will CDbl(String) work on all systems?

    - by Jamie Bull
    I have an application which contains the line below to assign a parsed XML value to a variant array:
    V(2) = latNode.Text * 1
    This works fine on my system (Windows 7, Excel 2010) but doesn't work on some other system or systems - and I've not been able to get a response from the user who reported the problem. I've switched out the offending line for:
    V(2) = CDbl(latNode.Text)
    This still works on my system, but then I had no problem in the first place. The question is on what systems does the first approach fail, and why, and will the second method always work? I'm sure I've used the "String * 1" trick elsewhere before and would like to know how concerned I should be about tracking down other occurrences. Thanks.

    Read the article

  • Is it necessary to burn-in RAM for server-class systems?

    - by ewwhite
    When using server-class systems with ECC RAM, is it necessary or even useful to burn in the memory DIMMs prior to deployment? I've encountered an environment where all server RAM is put through a lengthy multi-day burn-in/stress-testing process. This has delayed system deployments on occasion and adds an extra step to the hardware lead time. The server hardware is primarily Supermicro, so the RAM is sourced from a variety of vendors, not directly from the manufacturer like a Dell PowerEdge or HP ProLiant. Is this process useful? In my past experience, I simply used vendor RAM out of the box. Isn't that what the POST memory tests are for? I've encountered and responded to ECC errors long before a DIMM actually failed; the ECC thresholds were usually the trigger for warranty placement. Do you burn in your RAM? If so, what method do you use to perform the tests? Has the burn-in process resulted in any additional platform stability? Has it identified any pre-deployment problems?

    Read the article

  • How can I "share" a network share over the internet to multiple operating systems?

    - by Minsc
    Hello all, we have a network share accessible through our intranet that is widely used. This share has its own set of fine-tuned permissions. I have been tasked with allowing A.D.-authenticated access to this share over the internet without the use of a VPN. The internet access has to mimic the NTFS permissions in place on the share. Another piece of the puzzle is that access over the internet has to allow perusal of the share from both Windows and Mac OS systems. I had envisioned a web front end that would facilitate downloading to and uploading from the share via a web browser. I'm trying to ask for some suggestions about what type of setup is necessary to achieve this. I've done loads of testing and searching for solutions, but I can't seem to get anything to work as I hope. The web server that will be handling all of this is a Windows 2K8 box with IIS 7. How can I allow the users to authenticate against Active Directory when coming from the internet, even when coming from a Mac system? I hope my question is not too broad; I'm sorry if I should have broken it up into multiple questions. It all is just tied together in my head. Thank you all for your time and aid.

    Read the article

  • Which default Database Systems come installed in Microsoft VS2010 Express?

    - by Tonygts
    Appreciate all advice on the following questions:
    1. Which database systems (MS SQL 2008, MS SQL Compact, or others) come installed with the VS2010 Express edition?
    2. SQL Server 2008 R2 Express is free - can we install it and integrate it with VS2010 Express?
    3. How do I uninstall those databases that already come installed?
    I have installed VS2010 Express on Windows 7; just the VS2010 components (VB, C#, C++ and Web Developer), without installing any other things like SQL Express. In the Control Panel's Programs & Features window, the installed list is shown below:
    Microsoft SQL Server 2008 Setup Support File
    Microsoft SQL Server 2008 Browser
    Microsoft SQL Server VSS Writer
    Microsoft SQL Server Database Publishing Wizard 1.4
    Microsoft ASP.NET MVC2 - VWD Express 2010 Tools
    Microsoft SQL Server 2008 Management Objects
    Microsoft SQL Server Compact 3.5 SP2 ENU
    Microsoft SQL Server System CLR Types
    Microsoft Silverlight 3 SDK
    Microsoft ASP.NET MVC 2
    Microsoft Visual Studio 2010 ADO.NET Entity Framework Tools
    Visual Studio 2010 Tools for SQL Server Compact 3.5 SP2 ENU
    Web Deployment Tool
    Microsoft Visual Web Developer 2010 Express - ENU
    Microsoft Visual C++ 2010 Express - ENU
    Microsoft Visual C# 2010 Express - ENU
    Microsoft Visual Basic 2010 Express - ENU
    Microsoft SQL Server 2008
    As you can see, Microsoft SQL Server 2008 (last line) and, near the top, Microsoft SQL Server Compact 3.5 SP2 ENU and many related SQL components such as Microsoft SQL Server 2008 R2 Management Objects are also installed. These were actually installed by installing VS2010 Express, but I have no idea how to use them or verify their valid existence from VS2010. Also, do I have to uninstall them before I install SQL Server 2008 R2, which is the latest version I believe? And what tool is needed to manage and create data sources and tables?

    Read the article

  • Made a .dmg for a project; user can't open it - "no mountable file systems"

    - by dragonridingsorceress
    Hello, we don't know a great deal about Macs. We had to make an installer and were told to try a .dmg. So we put together version 1, and it seemed to work. We had one application file, which had our icon, and one folder. The user was instructed to drag these into the Applications folder, a Mac-style shortcut to which was included in the dmg. Then we were told we needed to update files, and were assured that we could do so via drag-and-drop. So we did; we dragged them into the folder in the dmg. We tested it (on the computer we were using to edit the dmg) and it seemed to work. So we burnt it onto a disk (along with a Windows installer that actually works!). I've just gotten an email from the recipient. She's got a Mac laptop. She inserted the disk, double-clicked on it, double-clicked on the .dmg, and got a warning: no mountable file systems. Screenshot: http://www.flickr.com/photos/97292258@N00/5101670174/ I have the dmg (not on a disk) and am able to open it with no difficulty. How can we get it to work for our recipient?

    Read the article

  • How can I generate filesystem images that are usable on many different virtualization systems?

    - by Mark Longair
    I have written a script that generates a root filesystem image (based on Debian lenny) suitable for User-Mode Linux. (Essentially this script creates a filesystem image, mounts it with a loop device, uses debootstrap to create a lenny install, sets up a static IP for TUN/TAP networking, adds public keys for login by SSH and installs a web application.) These filesystem images work pretty well with UML, but it would be nice to be able to generate similar images that people can use on alternative virtualization software, and I'm not familiar with these options at all. In particular, since the idea is to use this image as a standalone server for testing the web application, it's important that the networking works. I wonder if anyone can suggest what would be involved in customizing such root filesystem images such that they could be used with other virtualization software, such as VMware, Xen or as an Amazon EC2 instance? Two particular concerns are: If such systems don't use a raw filesystem image (e.g. they need headers with metadata or are compressed in some particular way) do there exist tools to convert between the different formats? I assume that in the filesystem, at least /etc/network/interfaces will have to be customized, but are more involved changes likely to be necessary? Many thanks for any suggestions...

    Read the article

  • How to know strong name of GWT serialization policy at the time of host page generation?

    - by Alexander Vasiljev
    There is an excellent article describing a way to embed the GWT RPC payload into the host page. A key element missing there is how to know the strong name of the RPC serialization policy at run time. The strong name is computed at compile time, put into the client and obfuscated. The strong name is sent to the server with the RPC request as described here. What would you suggest to make this parameter available at the time of host page generation?

    Read the article

  • How do I make dependency generation work for C? (Also... decode this sed/make statement!)

    - by Derek
    Hi all. I have a make build system that I am trying to decipher that someone else wrote. I am getting an error when I run it on a Red Hat system, but not when I run it on my Solaris system. The versions of gmake are the same major revision (one off on the minor revision). This is for building a C project, and the make system has a global Makefile.global that is inherited by each directory's local Makefile. The Makefile.global has all the targets in it, starting with
    all: $(LIB) $(BIN)
    where LIB builds libs and BIN builds binaries. Jumping down the targets I have:
    $(LIB) : $(GEN_LIB)
    $(GEN_LIB) : $(GEN_DEPS) $(GEN_OBJS)
            $(AR) $(ARFLAGS) $(GEN_LIB) $(GEN_OBJS)
    $(GEN_DEPS) :
            @set -e; rm -f $@; \
            $(CC) $(CDEP_FLAG) $(CFLAGS) $(INCDIRS) `basename $@ | sed 's/\.d/\.c/' | sed 's,^,$(HOME_SRC)/,'` | sed 's,\(.*\)\.o: ,$(GEN_OBJDIR)/\1.o $@ :,g' > [email protected] ; \
            cat [email protected] > $@ ; \
            cat [email protected] | cut -d: -f2 | grep '\.h' | sed 's,\.h,.h :,g' >> $@ ; \
            rm [email protected]
    $(GEN_OBJS) :
            $(CC) $(CFLAGS) $(INCDIRS) -c $(*F).c -lmpi -o $@
    I think these are all the relevant targets I need to include to answer my question. Definitions of those variables:
    CC = icc
    CDEP_FLAG = -M
    CFLAGS = various compiler flags, ifdef-type flags
    INCDIRS = include directory where all the .h files are
    GEN_OBJDIR = /lib/objs
    HOME_SRC = .
    GEN_LIB = lib/$(LIB)
    GEN_DEPDIR = /lib/deps
    GEN_DEPS = $(addprefix $(GEN_DEPDIR)/,$(addsuffix .d,$(basename $(OBJS))))
    I think this has everything covered that you need; it is basically self-explanatory from the names. Now, as best I can tell, this is generating in /lib/deps a .d file that has the object and source dependencies in it. In other words, for the utilities.a library I will get a utils.o and utils.c dependency stack, all in the file utils.d. There is some syntax error being generated in that file, I think, because I get the following error:
    ../lib/deps/util.d:25: *** target pattern contains no '%'.  Stop.
    gmake[2]: *** [all] Error 2
    gmake[1]: *** [all] Error 2
    gmake: *** [all] Error 2
    I am not sure if my error is in the dependency generation, or in some further-down part like the object generation target? If you need further info, let me know, I will add to the post.

    Read the article

  • Can anyone give me a sample Java socket program for doing peer-to-peer between 3 systems?

    - by Sadesh Kumar N
    I am doing a university project and I need some sample programs on peer-to-peer Java socket programming. Everywhere, people say to add a server socket to the client program, and I am confused. Will a single program having both a server socket and a client socket do, or do I have to create two programs - one initiating a system and another peer program running three times to solve the problem - or do I need to create three programs for three peer systems? I am not clear on the architecture of building peer-to-peer programs using Java sockets. Can someone help me by giving a simple program showing how to create a peer-to-peer connection between three systems? I know how to write a socket program for the client-server model and am clear on that concept, but creating a peer-to-peer architecture sounds complex for me to understand. I also referred to this thread: developing peer to peer in java. The person who commented second says: "To make a peer2peer app each client opens a server socket too. When client A wishes to connect to client B it just connects to its socket." I need some more samples and an explanation of how a peer-to-peer Java socket program works. I don't want any external API like JXTA to do this task. I need a clear picture of how it works alone, with an example.
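    One way to picture the architecture is that every peer runs both halves of the client-server model at once. The sketch below is a minimal illustrative peer, not taken from the question or the linked thread: the port-number arguments and the "<port> <message>" console format are assumptions chosen for the example.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Run three copies with different ports, e.g. "java Peer 5001", "java Peer 5002",
    // "java Peer 5003", then type "<port> <message>" (e.g. "5002 hello") to send to another peer.
    public class Peer {
        public static void main(String[] args) throws Exception {
            int myPort = Integer.parseInt(args[0]);

            // Server half: accept connections from other peers in a background thread.
            Thread listener = new Thread(() -> {
                try (ServerSocket server = new ServerSocket(myPort)) {
                    while (true) {
                        try (Socket in = server.accept();
                             BufferedReader reader = new BufferedReader(
                                     new InputStreamReader(in.getInputStream()))) {
                            System.out.println("received: " + reader.readLine());
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            listener.setDaemon(true);
            listener.start();

            // Client half: read "<port> <message>" from the console and connect out.
            BufferedReader console = new BufferedReader(new InputStreamReader(System.in));
            String line;
            while ((line = console.readLine()) != null) {
                String[] parts = line.split(" ", 2);
                if (parts.length < 2) continue;
                try (Socket out = new Socket("localhost", Integer.parseInt(parts[0]));
                     PrintWriter writer = new PrintWriter(out.getOutputStream(), true)) {
                    writer.println(parts[1]);
                }
            }
        }
    }

    Running three instances on three different ports gives the three-system topology asked about: any peer can send to any other because each one is simultaneously a server and a client.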

    Read the article

  • A GUI-based application on Linux works properly on some systems, but a segmentation fault (caused by a SIGSEGV signal) occurs on others. Why? [closed]

    - by Sreejith
    The application consists of driver code, a shared object file (.so), and application code that interacts with a hardware card. The problem occurs in an mmap() call: it reads an address from the card, but it does not get the correct address on some systems. The error is that the application receives a SIGSEGV signal, followed by a segmentation fault. But some systems with the same kernel version do not face the problem at all and work properly. So please, can anyone suggest a reason for and remedy to this problem?

    Read the article
