Search Results

Search found 4045 results on 162 pages for 'automatic maintenance'.

Page 126/162 | < Previous Page | 122 123 124 125 126 127 128 129 130 131 132 133  | Next Page >

  • How do I do a mail merge that includes images? (Maybe in Word 2007)

    - by Ian Ringrose
    I am trying to find out the practicalities of doing a mail merge when each “record” to be merged on includes some images. I need to print letters and envelopes. Both the letters and the envelopes have: fixed text, fixed images, text that comes from the mail merge record, and images that come from the mail merge record. I don’t know if all images will be the same size for every record, so a bit of simple “on the fly” automatic formatting may be needed. I need to be able to repeat a single item if I get a problem (e.g. when folding the letter). What problems am I likely to have? Is Word 2007 up to this sort of mail merging, or should I be looking at a report-writing tool? How do I restart a print run after a printer jam etc.? What format should I store the “records” and their images in? E.g. can standard software cope with images that are stored in separate files named after the “CustomerId” that is in the “record”? (I can write custom software if needed, but would rather use standard “off-the-shelf” software for the printing; I am planning on custom software for the data creation, so I can output in whatever format is needed.)

    Read the article

  • pgpool2+streaming replication failover on only 2 servers?

    - by aneez
    I am trying to configure pgpool2 and PostgreSQL 9.1 to handle failover. I currently have streaming replication running, and am using pgpool2 for read-only load balancing. I have 2 servers in my setup, both running PostgreSQL: 1 master and 1 slave. The master is also running pgpool2. My question is: how do I configure this setup to handle failover? Specifically in the case that the master crashes, and the slave has to take over and run pgpool2 as well. Most documentation and examples I have been able to find assume that pgpool2 is running on a separate server and thus "never" crashes. I may or may not be attacking the problem using the wrong tools. In my production setup I have a total of 3 identical servers, all in independent locations. The main goal of the setup is to achieve high uptime, so failover should be automatic, and bringing a failed node back up should cause only minimal downtime. I want all 3 nodes to be as close to identical as possible, and to be able to run with just 1 or 2 nodes available. And if possible I want to use load balancing to improve performance. Can anyone help me gain some insight into how to do this using my current setup, or suggest a different/better setup? Thank you!

    Read the article

  • Office 2010 Trust Center settings: How to enable data connections in the "old" way?

    - by GSerg
    We're planning an upgrade from Office 2003 to 2010 and have identified a big problem. In Office 2003, if the workbook you're opening contains a query table that fetches data from a data source automatically (upon file open or at certain intervals), then a security dialog pops up asking whether you want to allow that. If you say Yes, the queries will refresh automatically when they need to. If you say No, the queries will not refresh automatically, neither on file open nor on time intervals, but you will be able to refresh any of them manually at any time by right-clicking and selecting Refresh. There is also a registry parameter to say: don't display that dialog, just allow the queries. This is exactly what we want. On users' computers we have the registry parameter applied, so the users never see any dialogs. On developers' computers the parameter is not applied, so every time a file is opened the developer decides whether to allow the auto-refreshing for the current session. Usually the answer is No, because for developing it is essential that queries do not refresh when they want to, but instead refresh when the developer wants. The problem is that in Office 2010, which we are testing, we can't find a way to achieve this functionality: The allow/disallow messages are now grouped into one yellow button that either allows everything or disallows everything (including, say, macros, if macro security is set to "Disable, but ask"). If you don't click the yellow Allow button, the queries are disabled completely, not just for automatic execution. You cannot right-click and refresh a particular query: doing that would summon a security dialog prompting for enabling queries, and if you say Yes, all queries in the document will be enabled for auto-execution and will start executing immediately. This sort of ruins our development environment. Is there a way to get the trust settings in Office 2010 to work in the same way as before? Is there yet another registry parameter to say: prompt for auto-refresh, but allow manual refresh even when auto-refresh is disabled?

    Read the article

  • PARTNER WEBCAST- INNOVATIONS IN PRODUCTS PROGRAM (FORMERLY KNOWN AS COMPETENCE VIRTUAL)

    - by mseika
    PARTNER WEBCAST- INNOVATIONS IN PRODUCTS PROGRAM (FORMERLY KNOWN AS COMPETENCE VIRTUAL), JULY 2ND, 2012 AT 04:00 PM CET (03:00 PM GMT). I am pleased to invite you to join the Innovations in Products webcast. Innovations in Products will present Oracle Applications products' new functions and features, including sales positioning. The key objective of these webcasts is to inspire System Integrators' implementation personnel to conduct successful after-sales in their customer projects. Innovations in Products is presented on the 1st Monday of each quarter after the billable day (4:00 to 5:00 PM CET). The webcast is intended for System Integrators' Implementation Certified Specialists, but Innovations in Products is open to other interested Oracle Applications system integrator personnel as well. At first, two Oracle representatives will discuss Oracle's contribution to partners. Then you will see product breakout sessions followed by Q&A with Oracle experts. Each session will last for a maximum of 1 hour. A Q&A document covering all questions and answers will be made available after the webcast.

    What are the benefits for partners? Find out how Innovations in Products helps you to improve your after-sales. Discover new functions and features so you can enrich your customers' solutions. Learn more about Oracle Applications products, especially sales positioning. Hear crucial questions raised by colleagues and learn from their interests. Engage and present your questions to subject experts. Be inspired by the richness of the Oracle Applications portfolio, for your and your customers' benefit. Note: Should you already be familiar with a specific product, choose another one; doing so will expand your knowledge of the overall Applications portfolio. Some presentations contain product demonstrations, although they are not intended to be extremely detailed technical presentations. Note: In the latter part of this email you will also find 17 links to the recent Applications Products presentations and 6 links to the Public Sector Value Proposition presentations that were presented in the Innovations in Industries program.

    Product breakout sessions: Fusion Applications Technology and Extensibility; Fusion Applications - Transforming your Back-Office Accounting Function; Fusion HCM & Talent Overview & Extensibility; Fusion HCM Compensation Planning; Enterprise PLM for the Product Value Chain; Oracle's Asset Management and Maintenance Solution. For more details please visit Innovations in Products and the other breakout sessions on the OPN page.

    Delivery format: The Innovations in Products program is a series of FREE prerecorded Applications product presentations followed by Q&A, delivered over the Web. Participants have the opportunity to submit questions during the webcast via chat, and subject matter experts will provide verbal answers live. Innovations in Products consists of several parallel prerecorded product breakout sessions, each lasting a maximum of 1 hour. You can also watch Innovations in Products afterwards, as its content will be available online for the next 6-12 months. The next Innovations in Products webcasts will be presented as follows: July 2nd 2012, October 1st 2012, January 14th 2013, April 8th 2013.
    Note: Depending on local network bandwidth, please allow a few seconds for the presentations to download. You might want to refresh your screen by pressing F5. Duration: maximum 1 hour. For further information please contact me, Markku Rouhiainen.

    Read the article

  • Windows 7 black screen after boot animation

    - by t_virus
    This is happening frequently with my Windows 7 64-bit install. Today I was trying to install an icon pack. There was already a restore point prior to it, so I didn't feel like creating one. I replaced the following system files in both system32 and SysWOW64: imageres.dll, imagesp1.dll, zipfldr.dll and shell32.dll. After a reboot I see the Windows boot animation and then a black screen (with no mouse). I left it like that for 10 minutes and the laptop automatically went into sleep mode. I wake it up and see an error box with this error: "interactive logon failed to initialize". I close it and I'm left with a black screen again, this time with a mouse though. I tried using automatic system repair; it just tells me to remove any extra hardware (I have none attached at the moment). System Restore finishes successfully but doesn't fix the problem. Safe mode doesn't work either: the same black screen after the boot animation. Can someone help me with this? Also, I'll be grateful if anyone can upload those default files (listed above) for both system32 and SysWOW64 from a Windows 7 64-bit install (no service pack).

    Read the article

  • Protocol (or service publish/discovery) to detect devices in network

    - by Gobliins
    We connect some embedded devices to a network. What I am looking for now is a way to find the devices' IPs and identify them. We work with Windows PCs, and I am about to write a C# tool that should do this. I thought about sending a UDP broadcast where the acknowledgement contains the device's IP, which would mean the device needs a running daemon and has to assign an IP to itself. Another option is running a service (like a printer does) on the device, and on the PC just looking the service up. I read about some things like APIPA, Zeroconf, IPv4 link-local, Bonjour, DNS-SD and mDNS; they can automatically assign IPs and publish services in a network. My question is: can someone recommend what would be good for my task? The protocol or service should be low on resource use (memory/CPU). Are there some standard protocols to use? Is DNS a good idea, or would it be too resource-consuming just for finding a device's IP? It should also work when no DHCP servers are around. Edit: To clarify a bit: the IP configuration is automatic. The problem to focus on is how to tell the PC which IP in the network belongs to the device (identity); in the case of a direct connection there would be only one.
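
    A rough sketch of the UDP-broadcast idea in C# is below. The port number, the "DISCOVER" probe and the assumption that each device's daemon answers with an identity string are made up for illustration, not an established protocol; the device's IP simply comes from the sender address of each reply.

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class DeviceDiscovery
        {
            const int DiscoveryPort = 30303; // arbitrary; must match the daemon listening on the devices

            static void Main()
            {
                using (var udp = new UdpClient())
                {
                    udp.EnableBroadcast = true;
                    udp.Client.ReceiveTimeout = 2000; // stop waiting after 2 seconds of silence

                    // Ask every device on the subnet to identify itself.
                    byte[] probe = Encoding.ASCII.GetBytes("DISCOVER");
                    udp.Send(probe, probe.Length, new IPEndPoint(IPAddress.Broadcast, DiscoveryPort));

                    try
                    {
                        while (true)
                        {
                            var remote = new IPEndPoint(IPAddress.Any, 0);
                            byte[] reply = udp.Receive(ref remote); // blocks until a reply or the timeout
                            // The IP comes from the UDP header; the payload can carry the device identity.
                            Console.WriteLine("{0} -> {1}", remote.Address, Encoding.ASCII.GetString(reply));
                        }
                    }
                    catch (SocketException) { /* timeout: no more replies */ }
                }
            }
        }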

    Read the article

  • Image Management System with Different Levels of Access

    - by Jason
    I work in the graphics department for a real estate brokerage, and we deal with a lot of photos. Agents take the photos, upload them to me, I touch up and standardize the photos, then I add them to an in-house server for future use by the graphics dept. I'd like to make the "sanitized" photo files available to the agents to use when they want, but I don't want the agents poking around the graphics department's files (things get misplaced, renamed and messed up in a hurry). What would be perfect is if we could create a read-only "mirror" (correct term?) of that server that could be accessed by the agents as needed, but which wouldn't feed back into our "sanitized" file system. Edit: I'm looking for an automatic solution... manually posting the files to two separate locations seems like an inelegant solution when working digitally. Edit: I'm trying to avoid any access barriers to the public (dirty) file system (however it's finally implemented). There are 40-50 real estate agents who need to access these files, half of whom can't reliably download an email attachment.
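
    A minimal sketch of the "automatic one-way mirror" idea is below; the share paths are placeholders, and in practice a scheduled robocopy /MIR job, DFS replication, or simply a second share with read-only permissions for the agents would achieve the same thing with less code.

        using System;
        using System.IO;

        class MirrorJob
        {
            // Placeholder paths: the graphics department's master folder and the agents' read-only share.
            const string Source = @"\\graphics\sanitized";
            const string Target = @"\\agents\photos";

            static void Main()
            {
                foreach (string src in Directory.GetFiles(Source, "*.*", SearchOption.AllDirectories))
                {
                    string dst = Path.Combine(Target, src.Substring(Source.Length + 1));
                    Directory.CreateDirectory(Path.GetDirectoryName(dst));

                    // Copy only new or newer files, strictly one way; agents never touch Source.
                    if (!File.Exists(dst) || File.GetLastWriteTimeUtc(src) > File.GetLastWriteTimeUtc(dst))
                        File.Copy(src, dst, true);
                }
            }
        }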

    Read the article

  • How do I install gfortran (via cygwin and etexteditor) and enable ifort under Windows XP?

    - by bez
    I'm a newbie in the Unix world so all this is a little confusing to me. I'm having trouble compiling some Fortran files under Cygwin on Windows XP. Here's what I've done so far: Installed the e text editor. Installed Cygwin via the "automatic" option inside e text editor. I need to compile some Fortran files so via the "manage bundles" option I installed the Fortran bundle as well. However, when I select "compile single file" I get an error saying gfortran was missing, and then that I need to set the TM_FORTRAN variable to the full path of my compiler. I tried opening a Cygwin bash shell at the path mentioned (.../bin/gfortran), but the compiler was nowhere to be found. Can someone tell me how to install this from the Cygwin command line? Where do I need to update the TM_FORTRAN variable for the bundle to work? Also, how do I change the bundle "compile" option to work with ifort (my native compiler) on Windows? I've read the bundle file, but it is totally incomprehensible to me. Ifort is a Windows compiler, invoked simply by ifort filename.f90, since it is on the Windows path. I know this is a lot to ask of a first time user here, but I really would appreciate any time you can spare to help.

    Read the article

  • How can I tell System Restore in Windows 7 recovery console to use my recovered backup drive's restore point data?

    - by Rich Shealer
    My Windows 7 desktop PC failed to boot. It would get to a grayish screen with a mouse and would only respond to the power button. After much examination I found that the problem was not a failed drive, as running CHKDSK from the Recovery Console on my main drives passed without any errors. I had been installing various Java versions in the days before the failure, so I decided to use a restore point to roll backwards. I have an external SATA drive controller with two 2 TB drives mirrored using the Windows mirroring function. My system has been backing up to this drive regularly. The problem is I accidentally broke the mirror when testing to see if this drive system might have been causing my boot issue. Connecting it to another machine showed two dynamic drives that were invalid. In the end I reformatted one as an NTFS basic disk and used recovery software on the other to copy all of the files to the reformatted drive. I had to copy the restore points into the new drive's System Volume Information folder by granting rights to that user. I moved the drive back to the original machine and rebooted. I can see my new drive; it even uses the same drive letter as it did in normal mode. Running System Restore, it lists a new automatic restore point created while sitting at the Recovery Console, along with all of my backups. Selecting the backup I want (or any other), I get a dialog: "The backup drive could not be found. System Restore is looking for restore points on your backup. Make sure the backup drive is on and connected to this computer and then click OK." What do I need to do to allow System Restore to see the restore points?

    Read the article

  • Memory Usage by Mapped Files (Win7 64Bit)

    - by Dexter
    When copying files from/to my external USB3 HDD, memory usage in Win7 goes up to 100% and remains there. I'm not sure whether this is a problem caused by faulty drivers or not, but I already have the current version of them (Etron USB3 controller on a Gigabyte 990fxa board). Using RAMMap it becomes obvious that the files that are to be copied are mapped into memory. Clicking on Empty > Empty System Working Set seems to temporarily fix the problem (without causing any trouble with the file copy process), but it needs to be done every few seconds. Is there any way to schedule this operation to happen every 10 or so seconds on its own? What underlying system command is RAMMap using? Or, alternatively, is there any way to limit how much RAM mapped files may use in Windows 7? I know mapped files would usually be removed from memory if other programs need the memory, but while memory usage is at 100% the system starts freezing up for half a second or so every time I click anything, so the automatic removal of unused memory contents seems to be failing here.
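
    It is not stated here what RAMMap actually calls, but emptying the file-system cache working set can be approximated with the Win32 SetSystemFileCacheSize call; the sketch below is an assumption about what "Empty System Working Set" maps to, not a confirmed equivalent, and runs it every 10 seconds. It may need to run elevated with SeIncreaseQuotaPrivilege available.

        using System;
        using System.Runtime.InteropServices;
        using System.Threading;

        class CacheFlusher
        {
            // Passing (SIZE_T)-1 for both sizes asks Windows to flush the file-system cache working set.
            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool SetSystemFileCacheSize(IntPtr minimumFileCacheSize, IntPtr maximumFileCacheSize, int flags);

            static void Main()
            {
                // Crude stand-in for "Empty System Working Set every 10 seconds".
                using (var timer = new Timer(_ =>
                {
                    if (!SetSystemFileCacheSize(new IntPtr(-1), new IntPtr(-1), 0))
                        Console.WriteLine("Flush failed, Win32 error {0}", Marshal.GetLastWin32Error());
                }, null, TimeSpan.Zero, TimeSpan.FromSeconds(10)))
                {
                    Console.WriteLine("Flushing the system file cache every 10 s; press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }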

    Read the article

  • Can't login to Windows server 2008 (as any user, not even locally, not in safe mode but I have right credentials)

    - by Saix
    Out of nowhere I can't log in to my Windows Server 2008 machine. All the services like the FTP server or webserver (which I'm actually not using, just remote desktop and FTP) are running. Whatever credentials I try (even/especially administrator), it always says Unknown username or bad password. I have already tried a hard power off/on and safe mode, without luck. I have also already tried typing the login name as SERVER NAME\user or Workgroup\user (every case-sensitive scenario); it still says I have the wrong login. Usually we use remote desktop to access the machine, but local access over KVM doesn't work either. Now I'm locked out of any control or any way to do something. There's just the logon screen, preceded by the Ctrl+Alt+Del-to-log-in prompt. Without being able to log in I can't actually try to fix anything. I can't find much more on the Internet except the SERVER NAME\user thing. A reinstall would be the last resort, but I can't leave things this way for much longer anyway. This server is vital. If it is any help, I think automatic Windows updates are turned off, and there were no updates or newly installed software for the last couple of years and just a few soft restarts, none of them recently. It happened during runtime while all other services were still up and running, so this couldn't be just some nasty Windows screw-up during boot or something. What could have possibly changed? What are my options now?

    Read the article

  • Path Not Found error when opening VB6 project from a shared folder on Virtual PC 2007 (XP sp3)

    - by law1185
    I currently work on a small software team that primarily maintains legacy software. I am trying to set up a Virtual PC that we can use to do this maintenance. Specifically, I would like to be able to debug and run VB6 web apps from a folder on the host PC. My constraints are as follows: The Virtual PC will not be registered on the domain. The server that hosts our Subversion repository does not run the Subversion service, so the only way to interact with the repository is through "file:\\", which requires domain authentication. It is not possible to debug/run VB6 web apps that are located on mapped network drives, because IIS requires that the Virtual PC be on the same domain as the network drive. I would like to avoid having to copy the folder from the host PC to the Virtual PC and then copying it back in order to have the latest revision from Subversion. So, I am trying to use Virtual PC's shared folder feature to share the host machine's Subversion directory and open the project in VB6 on the Virtual PC. The problem is that Visual Basic throws the error "Path not found: '\\C:\\Subversion\Path\Project.vbp'" when I try to open it. Folder C:\Subversion on the host machine is mapped to G: on the Virtual PC. If anyone can help me resolve this error or find some other way to accomplish this, I would be deeply grateful. Oh, both the host and virtual OS are Windows XP SP3, using VB 6.0 and IIS v5.1. I can manipulate files in the shared directory freely from the Virtual PC, i.e. copy, paste, delete, etc. Edit: Link to screenshot: http://img190.imageshack.us/img190/5439/vpcscreen.png

    Read the article

  • TF203015 The Item $/path/file has an incompatible pending change. While trying to unshelve.

    - by drachenstern
    I'm using Visual Studio 2010 Pro against Team Foundation Server 2010, and I had my project opened (apparently) as a solution from the repo, but I should've opened it as a "web site". I found this out during compile, so I went to shelve my new changes and deleted the project from my local disk, then opened the project again from source (this time as a web site) and now I can't unshelve my files. Is there any way to work around this? Did I blow something up? Do I need to do maintenance at the server? I found this question on SO #2332685 but I don't know what cache files he's talking about (I'm on XP :\ ) EDIT: Found this link after posting the question; sorry for the delay in researching, but it still didn't fix my problem. Of course I can't find an error code for TF203015 anywhere, so no resolution either (hence my inclusion of the number in the title, yeah?). EDIT: I should probably mention that these files were never checked in in the first place. Does that matter? Can you shelve an item that was never checked in? Is that what I did wrong? EDIT: WHAP - FOUND IT!!! Use "Undo" on the items that don't exist, because they show up in pending changes as check-ins.

    Read the article

  • Selectively suppress XML Code Comments in C#?

    - by Mike Post
    We deliver a number of assemblies to external customers, but not all of the public APIs are officially supported. For example, due to less-than-optimal design choices, sometimes a type must be publicly exposed from an assembly for the rest of our code to work, but we don't want customers to use that type. One part of communicating the lack of support is to not provide any IntelliSense in the form of XML comments. Is there a way to selectively suppress XML comments? I'm looking for something other than ignoring warning 1591, since that is a long-term maintenance issue. Example: I have an assembly with public classes A and B. A is officially supported and should have XML documentation. B is not intended for external use and should not be documented. I could turn on XML documentation and then suppress warning 1591, but when I later add the officially supported class C, I want the compiler to tell me that I've screwed up and failed to add the XML documentation. This wouldn't occur if I had suppressed 1591 at the project level. I suppose I could #pragma across entire classes, but it seems like there should be a better way to do this.
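
    For reference, a sketch of the two per-type options mentioned (class names are made up): #pragma scopes the CS1591 suppression to just the unsupported type, and [EditorBrowsable] additionally hides it from IntelliSense in projects that reference the compiled assembly.

        using System.ComponentModel;

        /// <summary>Officially supported; a missing comment elsewhere still raises CS1591.</summary>
        public class A
        {
            /// <summary>Does supported work.</summary>
            public void DoWork() { }
        }

        #pragma warning disable 1591 // no XML docs required for the unsupported type below
        [EditorBrowsable(EditorBrowsableState.Never)] // also hides B from IntelliSense in consuming projects
        public class B
        {
            public void InternalPlumbing() { }
        }
        #pragma warning restore 1591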

    Read the article

  • Default Transaction Timeout

    - by MattH
    I used to set transaction timeouts by using TransactionOptions.Timeout, but have decided, for ease of maintenance, to use the config approach: <system.transactions> <defaultSettings timeout="00:01:00" /> </system.transactions> Of course, after putting this in, I wanted to test it was working, so I reduced the timeout to 5 seconds, then ran a test that lasted longer than this - but the transaction does not appear to abort! If I adjust the test to set TransactionOptions.Timeout to 5 seconds, the test works as expected. After investigating, I think the problem relates to TransactionOptions.Timeout, even though I'm no longer using it. I still need to use TransactionOptions so I can set IsolationLevel, but I no longer set the Timeout value; if I look at this object after I create it, the timeout value is 00:00:00, which equates to infinity. Does this mean my value set in the config file is being ignored? To summarise: Is it impossible to mix the config setting and the use of TransactionOptions? If not, is there any way to extract the config setting at runtime and use this to set the Timeout property? [Edit] Or set the default isolation level without using TransactionOptions?
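
    One hedged possibility for the second question (a sketch, not verified against this particular setup): System.Transactions exposes the configured default through TransactionManager.DefaultTimeout, so it can be fed back into TransactionOptions alongside the isolation level.

        using System;
        using System.Transactions;

        static class Tx
        {
            public static void Run(Action work)
            {
                var options = new TransactionOptions
                {
                    IsolationLevel = IsolationLevel.ReadCommitted,
                    // Picks up <system.transactions><defaultSettings timeout="00:01:00"/> at runtime
                    // instead of the TransactionOptions default of 00:00:00.
                    Timeout = TransactionManager.DefaultTimeout
                };

                using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
                {
                    work();
                    scope.Complete();
                }
            }
        }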

    Read the article

  • XtraGrid Suite - is there a way to add a button or hyperlink to a cell?

    - by calico-cat
    I'm working with the XtraGrid Suite made by DevExpress. I can't find any sort of functionality to do this, but I'm curious whether you can add a button or hyperlink to a grid cell. Context: I've got an Events list. Each Event has a Time, Start/End, and a Category (Utility and Maintenance). There can be Start events and Stop events. Having done my analysis of the problem, I've decided that having a StartTime and EndTime for each event would not work. So if an event starts, I'd record the current time on the Event object and set it as a 'Start' event. I'd like to add a "Stop" button/hyperlink to a cell in that row. If the user wishes to log an End event, the event type, etc. would be copied to a new Event with the type 'Stop', and the button would disappear. I hope this makes sense. EDIT: Aaronaught's answer is actually better than what I was originally asking for (a button), so I've updated the question. That way, anyone looking to put a hyperlink in a cell can benefit from his example : )
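
    A sketch of one way to do this with the suite's in-place editors, assuming the RepositoryItemButtonEdit repository item (names such as gridControl/gridView are placeholders, and depending on the version you may also need the grid option that shows editor buttons for all rows rather than only the focused cell):

        using DevExpress.XtraEditors.Controls;
        using DevExpress.XtraEditors.Repository;
        using DevExpress.XtraGrid;
        using DevExpress.XtraGrid.Views.Grid;

        public static class StopButtonColumn
        {
            public static void Attach(GridControl gridControl, GridView gridView)
            {
                var stopButton = new RepositoryItemButtonEdit { TextEditStyle = TextEditStyles.HideTextEditor };
                stopButton.Buttons[0].Kind = ButtonPredefines.Glyph;
                stopButton.Buttons[0].Caption = "Stop";
                stopButton.ButtonClick += (s, e) =>
                {
                    // Copy the focused Start event into a new 'Stop' event here, then hide the button.
                    object row = gridView.GetRow(gridView.FocusedRowHandle);
                };

                gridControl.RepositoryItems.Add(stopButton);
                var column = gridView.Columns.AddVisible("StopAction", "Stop");
                column.UnboundType = DevExpress.Data.UnboundColumnType.Object; // unbound: no data field behind it
                column.ColumnEdit = stopButton;
            }
        }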

    Read the article

  • Unit testing Monorail's RenderText method

    - by MikeWyatt
    I'm doing some maintenance on an older web application written in MonoRail v1.0.3. I want to unit test an action that uses RenderText(). How do I extract the content in my test? Reading from controller.Response.OutputStream doesn't work, since the response stream is either not set up properly in PrepareController(), or is closed in RenderText(). Example action:

        public void DeleteFoo( int id )
        {
            var success = false;
            var foo = Service.Get<Foo>( id );
            if( foo != null && CurrentUser.IsInRole( "CanDeleteFoo" ) )
            {
                Service.Delete<Foo>( id );
                success = true;
            }
            CancelView();
            RenderText( "{ success: " + success + " }" );
        }

    Example test (using Moq):

        [Test]
        public void DeleteFoo()
        {
            var controller = new FooController();
            PrepareController( controller );

            var foo = new Foo { Id = 123 };
            var mockService = new Mock<Service>();
            mockService.Setup( s => s.Get<Foo>( foo.Id ) ).Returns( foo );
            controller.Service = mockService.Object;

            controller.DeleteFoo( foo.Id );

            mockService.Verify( s => s.Delete<Foo>( foo.Id ) );
            Assert.AreEqual( "{success:true}", GetResponse( Response ) );
        }

        // response.OutputStream.Seek throws a "System.ObjectDisposedException: Cannot access a closed Stream." exception
        private static string GetResponse( IResponse response )
        {
            response.OutputStream.Seek( 0, SeekOrigin.Begin );
            var buffer = new byte[response.OutputStream.Length];
            response.OutputStream.Read( buffer, 0, buffer.Length );
            return Encoding.ASCII.GetString( buffer );
        }

    Read the article

  • [Linq to SQL] Multiple foreign keys to the same table

    - by cdonner
    I have a reference table with all sorts of controlled value lookup data for gender, address type, contact type, etc. Many tables have multiple foreign keys to this reference table I also have many-to-many association tables that have two foreign keys to the same table. Unfortunately, when these tables are pulled into a Linq model and the DBML is generated, SQLMetal does not look at the names of the foreign key columns, or the names of the constraints, but only at the target table. So I end up with members called Reference1, Reference2, ... not very maintenance-friendly. Example: <Association Name="tb_reference_tb_account" Member="tb_reference" <====== ThisKey="shipping_preference_type_id" OtherKey="id" Type="tb_reference" IsForeignKey="true" /> <Association Name="tb_reference_tb_account1" Member="tb_reference1" <====== ThisKey="status_type_id" OtherKey="id" Type="tb_reference" IsForeignKey="true" /> I can go into the DBML and manually change the member names, of course, but this would mean I can no longer round-trip my database schema. This is not an option at the current stage of the model, which is still evolving. Splitting the reference table into n individual tables is also not desirable. I can probably write a script that runs against the XML after each generation and replaces the member name with something derived from ThisKey (since I adhere to a naming convention for these types of keys). Has anybody found a better solution to this problem?

    Read the article

  • What are common anti-patterns when using VBA

    - by Ahmad
    I have been coding a lot in VBA lately (maintenance and new code), specifically with regards to Excel automation etc. = macros. Typically most of this has revolved around copy/paste, sending some emails, importing some files etc., but it eventually just ends up as a big ball of mud. As a person who values clean code, I find it very difficult to produce 'decent' code when using VBA. I think that in most cases this is a direct result of the macro recorder. Very helpful to get you started, but most times there are one too many lines of code to achieve the end result. Edit: The code from the macro recorder is used as a base to get started, but is not used in its entirety in the end result. I have already created a common add-in that has my commonly used subroutines and some utility classes, in an early attempt to enforce some DRYness - so this, I think, is a step in the right direction. But I feel as if it's a constant square peg, round hole situation. Wikipedia has an extensive list of common anti-patterns, and what scared me the most was how many I have implemented in one way or another. The question: Now, considering that my mindset is OO design, what are some common anti-patterns and possible solutions when designing a solution (think of this: how would designing a solution using Excel and VBA be different from, say, a .NET/Java/PHP/... solution), and when doing common tasks like copying data, emailing, data importing, file operations, etc.? An anti-pattern, as defined by Wikipedia, is: "In software engineering, an anti-pattern (or antipattern) is a pattern that may be commonly used but is ineffective and/or counterproductive in practice."

    Read the article

  • nHibernate strategies in a web farm

    - by Pete Nelson
    Our current project at work is a new MVC web site that will use a WCF service primarily to access a 3rd party billing system via a web service as well as a small SQL database for user personalization. The WCF service uses nHibernate for the SQL database. We'd like to implement some sort of web farm for load balancing as well as failover and maintenance. I'm trying to decide the best way to handle nHibernate's caching and database concurrency if there are multiple WCF services running. Some scenarios I've been thinking about... 1) Multiple IIS servers, one WCF server. With this setup, the WCF server would be a single point of failure, but there would be no issues with nHibernate caching or database concurrency. 2) Multiple IIS servers, each with it's own WCF service. This removes a single point of failure, but now nHibernate on one machine would not know about database changes done by another machine. Some solutions to number 2 would be to use an IStatelessSession so we're not doing any caching and nHibernate is always fetching directly from the database. This might be the most feasible as our personalization database has very few objects in it. I'm also considering a 2nd-level cache such as memcached or Velocity, but it may be overkill for this system. I'm putting this out there to see if anyone has experience doing this sort of architecture and to get some ideas for a solution. Thanks!

    Read the article

  • Java EE suitability for a social network using Cassandra datastore?

    - by Marcos
    We are in the process of making some important technology decisions for a social networking application. We're planning to have Cassandra (a NoSQL database) to support efficient data storage. We would be using Hector (a Java client) to interact with Cassandra. 1.) Would Java EE be a good choice over PHP for a social networking application in terms of performance, scalability and complexity? 2.) Another possible implementation strategy: is it suitable to have the backend alone in Java and the rest in PHP? 3.) What differences (as compared to PHP) does it make in terms of costs at the various stages of application development, deployment and maintenance? 4.) What are the things to keep in mind as we move along with Java development and deployment (as we are relatively new to the Java background)? 5.) Could you list some major production deployments of similar types of (social network) applications in Java? Thank you!

    Read the article

  • To Ajax or Not to Ajax a listing page

    - by kaivalya
    Here I am talking about product listing pages where there are multiple filters that narrow the list of products appearing on the page: product types, categories, price range, etc. I have done such pages both the Ajax way and the non-Ajax way in the past. What I like about using Ajax on such a page is that, when filters are selected, I only update the section that contains the product list. There is no need to refresh the whole page, which could end up reloading the images in the top bar, banners, etc. and slow things down for the user. The Ajax way, in my opinion, is more compact and responsive in terms of user experience. The downside of the Ajax route for me is that, since filter states are not maintained in the URL, I end up maintaining them on the server. This becomes complicated if I want to handle multi-window scenarios, and it is also costly to maintain such state in server memory for each session. Not using Ajax and simply keeping all filter values in the URL and refreshing the page is quite simple, but the luxury of refreshing only the pane that really needs to be refreshed is lost. Lately I have been seeing a lot of large-scale e-commerce sites that use the non-Ajax approach on their listing pages, and this is making me question one more time whether it might be more efficient to build a non-Ajax listing page for the sake of long-term ease of maintenance and sacrifice a little bit of user experience. I am about to start implementing a new listing page for a product where I have the flexibility to go either way, and I would appreciate your input.
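
    One middle ground, sketched below for ASP.NET MVC purely as an illustration (the actual stack is not stated here): keep the filters in the query string so the state lives in the URL and multi-window scenarios just work, and let the same action return either the full page or only the product-list partial for Ajax refreshes.

        using System.Web.Mvc;

        public class ProductsController : Controller
        {
            // Filters arrive on the query string, e.g. /Products?category=shoes&maxPrice=100,
            // so the page state is bookmarkable and never has to be kept in server memory.
            public ActionResult Index(string category, decimal? maxPrice)
            {
                var model = LoadProducts(category, maxPrice);

                // Ajax calls refresh only the listing pane; normal requests render the whole page.
                if (Request.IsAjaxRequest())
                    return PartialView("_ProductList", model);

                return View(model);
            }

            private object LoadProducts(string category, decimal? maxPrice)
            {
                // Placeholder for the real catalog query.
                return new { Category = category, MaxPrice = maxPrice };
            }
        }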

    Read the article

  • Source Control Manager Backend

    - by Gabriel Parenza
    Hi friends, What do you think is a better approach for a source control manager backend? I am weighing a plain file system vs. a hosted Subversion service. Hosted Subversion (my company already has another group taking care of this) advantages: zero maintenance on our end; automatic backup and recovery; reliability through auto-backup and file redundancy; built-in file history view, file merge and file diff. On the other hand, while the file system does not have the features mentioned above, it is much simpler. Moreover, if files are hosted on a Linux machine which is backed up, that takes care of file system crash issues. Subversion will need working copies, which are going to be on this same Linux machine, and hence the need to not have an extra layer. Folks, I am looking for stronger reasons why I should take Subversion instead of keeping things simple and going with the file system. Let me know your opinions. Many thanks in advance, Gabriel. PS: I have explored a few commercial source control managers, and have decided to go this route as it better suits our needs.

    Read the article

  • LaTeX: bibliography per chapter.

    - by YuppieNetworking
    Hello all, I am helping a colleague with his PhD thesis and we need to present the bibliography at the end of each chapter. The question is: does anyone have a minimal working example for this case using LaTeX + BibTeX? The current document structure that we use is the following: main.tex chap1.tex chap2.tex ... chapn.tex biblio.bib Where main.tex contains packages, document declarations, macros and \includes for each chapter. biblio.bib is the only BibTeX file (I think it is easier to have all citations in one place). We have searched and tried different LaTeX packages, reading and following their documentation. Specifically, bibunits and chapterbib. bibunits successfully generates bu*.aux files, but when running bibtex on each one of them, an error occurs since there is no \bibdata element in the .aux file. chapterbib also generates a .aux file, but bibtex finishes with an error caused by using multiple \bibliography{file} commands in the .tex files (one per chapter). Some coworkers suggested using a separate BibTeX file for each chapter, which could be a maintenance problem in the future when citing the same publications in different chapters. We would like to keep this document structure, if possible. So, if anyone could shed some light on this problem, we would appreciate it. Thanks.

    Read the article

  • ACL architecture for a Software as a Service in Spring 3.0

    - by geoaxis
    I am making a software as a service using Spring 3.0 (Spring MVC, Spring Security, Spring Roo, Hibernate) I have to come up with a flexible access control list mechanism.I have three different kinds of users System (who can do any thing to the system, includes admin and internal daemons) Operations (who can add and delete users, organizations, and do maintenance work on behalf of users and organizations) End Users (they belong to one or more organization, for each organization, the user can have one or more roles, like being organization admin, or organization read-only member) (role like orgadmin can also add users for that organization) Now my question is, how should i model the entity of User? If I just take the End User, it can belong to one or more organizations, so each user can contain a set of references to its organizations. But how do we model the users role for each organization, So for example User UX belongs to organizations og1, og2 and og3, and for og1 he is both orgadmin, and org-read-only-user, where as for og2 he is only orgadmin and for og3 he is only org-read-only-user I have the possibility of making each user belong to one organization alone, but that's making the system bounded and I don't like that idea (although i would still satisfy the requirement) If you have a better extensible ACL architecture, please suggest it. Since its a software as a service, one would expect that alot of different organizations would be part if the same system. I had one concern that it is not a good idea to keep og1 and og2 data on the same DB (if og1 decides to spawn a 100 reports on the system, og2 should not suffer) But that is some thing advanced for now and is not directly related to ACL but to the physical distribution of data and setup of services based on those ACLs This is a community Wiki question, please correct any thing which you wish to do so. Thanks

    Read the article
