Search Results

Search found 2613 results on 105 pages for 'strategy'.

Page 73 of 105

  • JCAPS deployment to multiple external system environments.

    - by ring bearer
    Hope a few people in here are familiar with JCAPS. Coming from a pure J2EE world, it is difficult to digest the deployment model that JCAPS offers. While creating a deployment profile, we need to map the resources (such as JDBC or web-service connectors) to external systems. External systems are predefined with the target server IP, port, DB name, credentials, etc. (in the case of JDBC). So the problem is that an EAR built for the test environment cannot be deployed to the production environment. In simpler applications we could store the database credentials in property files, and hence an EAR built for UAT could be deployed to production without any change. Is there a similar strategy available for JCAPS by which EARs built against one environment can be promoted to another seamlessly?
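
    For reference, a minimal sketch of the property-file approach the question mentions, for plain JDBC code; the file path and property keys are assumptions, and whether a JCAPS external-system mapping can be redirected this way is exactly what is being asked:

        // Load environment-specific JDBC settings at runtime instead of
        // baking them into the EAR. Path and key names are hypothetical.
        import java.io.FileInputStream;
        import java.util.Properties;

        public class DbConfig {
            public static Properties load(String env) throws Exception {
                Properties props = new Properties();
                try (FileInputStream in = new FileInputStream("/etc/myapp/" + env + ".properties")) {
                    props.load(in); // keys like jdbc.url, jdbc.user, jdbc.password
                }
                return props;
            }
        }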

    Read the article

  • mysql - filtering a list against keywords, both list and keywords > 20 million records

    - by threecheeseopera
    I have two tables, both with more than 20 million records; table1 is a list of terms, and table2 is a list of keywords that may or may not appear in those terms. I need to identify the terms that contain a keyword. My current strategy is:

        SELECT table1.term, table2.keyword
        FROM table1
        INNER JOIN table2
          ON table1.term LIKE CONCAT('%', table2.keyword, '%');

    This is not working; it takes forever. It's not the server (see the notes). How might I rewrite this so that it runs in under a day? Notes on server optimization: both tables are MyISAM and have unique indexes on the matching fields; the MyISAM key buffer is greater than the sum of both index file sizes, and it is not even being fully taxed (key_blocks_unused is ... large); the server is a dual-Xeon 2U beast with fast SAS drives and 8G of RAM, fine-tuned for the MySQL workload.
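
    A sketch of one common workaround, under the assumption that the keywords are whole words (a FULLTEXT index matches on word boundaries, unlike LIKE '%kw%', which matches arbitrary substrings). A leading-wildcard LIKE can never use a B-tree index, so one indexed probe per keyword, batched from a driving script, replaces the 20M x 20M scan; AGAINST() requires a constant string, so the keyword cannot come from a joined column:

        ALTER TABLE table1 ADD FULLTEXT INDEX ft_term (term);

        -- run once per keyword (or several words per probe in boolean mode),
        -- driven from a script:
        SELECT term
        FROM table1
        WHERE MATCH(term) AGAINST ('+keywordvalue' IN BOOLEAN MODE);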

    Read the article

  • Jenkins merge dev branch to master with -Xtheirs strategy

    - by Pandya M. Nandan
    With the Git plugin, we can merge one branch into another in the workspace before the build starts. However, the plugin does not offer the -Xtheirs strategy option. I want to merge dev into master, run the test cases, and only push back to master if they pass. The problem is that when there is a merge conflict, I want the dev branch's changes to win (I know I can use -Xtheirs manually). (I can select Git as Source Code Management, but then I can't use its additional behaviour 'Merge Before Build'.) When I run the required commands in the Execute Shell section of the Jenkins job, it does not work as required and fails with the error: fatal: dev - not something we can merge. The Jenkins job code is:

        echo Start of Build
        git checkout dev
        git checkout master
        git status
        git merge dev --no-commit
        echo End of Build

    I have also run the commands with bash -l -c "", but it is the same problem.
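
    A sketch of a shell step that usually avoids the "not something we can merge" error, which tends to occur because no local dev branch exists in a fresh Jenkins workspace; merging the remote-tracking ref and pushing only after tests pass is assumed here, with "origin" as the remote name:

        git fetch origin
        git checkout -B master origin/master
        git merge -X theirs --no-edit origin/dev
        # run the test suite here; only push if it succeeds
        git push origin master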

    Read the article

  • How to preserve data integrity while minimizing the transmission size

    - by user1500578
    We have sensors in the wild that send their data to a server every day via TCP/IP, through either 3G or satellite at the physical layer. The sensors can automatically switch from one to the other depending on their location and the quality of the signal with the local 3G operator. Given that 3G and satellite communications are very expensive, we want to minimize the amount of data sent, but we also want to protect ourselves from lost data. What would be the best strategy to ensure, with reasonable certainty, that the integrity of our data is preserved, while minimizing the amount of redundancy, i.e. the amount of data transmitted? I've read about the zfec codec, but I'm not sure whether we need to transmit all the chunks, or whether we need to send a hash code along with each chunk.
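
    A minimal sketch of the per-chunk hash idea, independent of zfec; the framing layout is an assumption. A short digest per chunk lets the receiver detect corruption and request retransmission of only the damaged chunk, at a fixed overhead of a few bytes:

        import hashlib

        def frame_chunk(seq: int, payload: bytes) -> bytes:
            # 4-byte sequence number + 4-byte length + 8-byte truncated SHA-256
            digest = hashlib.sha256(payload).digest()[:8]
            header = seq.to_bytes(4, "big") + len(payload).to_bytes(4, "big")
            return header + digest + payload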

    Read the article

  • Top x rows and group by (again)

    - by Tibor Szasz
    Hello, I know it's a frequent question, but I just can't figure it out and the examples I found didn't help. From what I've read, the best strategy is to find the top and bottom values of the top range and then select the rest, but implementing it is a bit tricky. Example table: id | title | group_id | votes. I'd like to get the top 3 voted rows from the table, for each group. I'm expecting this result:

        91 | hello1 | 1 | 10
        28 | hello2 | 1 | 9
        73 | hello3 | 1 | 8
        84 | hello4 | 2 | 456
        58 | hello5 | 2 | 11
        56 | hello6 | 2 | 0
        17 | hello7 | 3 | 50
        78 | hello8 | 3 | 9
        99 | hello9 | 3 | 1

    I've found complex queries and examples, but they didn't really help.
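
    A sketch of the standard top-N-per-group query on databases with window functions (PostgreSQL, or MySQL 8+); the table name is an assumption, and on older MySQL the usual substitute is the user-variable ranking trick:

        SELECT id, title, group_id, votes
        FROM (
          SELECT id, title, group_id, votes,
                 ROW_NUMBER() OVER (PARTITION BY group_id ORDER BY votes DESC) AS rn
          FROM example_table
        ) ranked
        WHERE rn <= 3
        ORDER BY group_id, votes DESC;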

    Read the article

  • PostgreSQL - pg_class question

    - by Sachin Chourasiya
    PostgreSQL stores statistics about tables in the system table called pg_class. The query planner accesses this table for every query. These statistics may only be updated using the ANALYZE command. If the ANALYZE command is not run often, the statistics in this table may not be accurate, and the query planner may make poor decisions which can degrade system performance. An alternative strategy would be for the query planner to generate these statistics for each query (including selects, inserts, updates, and deletes). This approach would allow the query planner to have the most up-to-date statistics possible. Why does Postgres always rely on pg_class instead?
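
    For reference, a short sketch of refreshing and inspecting those statistics by hand; the table name is an assumption:

        ANALYZE mytable;
        SELECT relname, reltuples, relpages FROM pg_class WHERE relname = 'mytable';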

    Read the article

  • Hibernate/JPA and PostgreSQL - Primary Key?

    - by Shadowman
    I'm trying to implement some basic entities using Hibernate/JPA. Initially the code was deployed on MySQL and was working fine. Now I'm porting it over to use PostgreSQL. In MySQL, my entity class defines its primary key as an auto-incrementing long value with the following syntax:

        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        private Long id;

    However, I've found that I get errors with PostgreSQL when I try to insert numerous records at a time. What do I need to annotate my primary key with to get the same auto-incrementing behavior in PostgreSQL as I have with MySQL? Thanks for any help you can provide!
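
    A sketch of one commonly used mapping for PostgreSQL, with an explicit sequence instead of AUTO; the generator and sequence names are assumptions:

        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "entity_seq")
        @SequenceGenerator(name = "entity_seq", sequenceName = "entity_id_seq", allocationSize = 1)
        private Long id;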

    Read the article

  • How can I explain to a programmer that CSS positioning has many benefits over table based layouts?

    - by Pat
    I have a friend who wishes to work as a freelance web developer, but insists that tables are the way forward for layouts. Several points he maintains in favour of tables:

        1. This is what was taught at the beginning of 10 years of programming and computer science degrees.
        2. Large companies use tables to achieve 'technical' things.
        3. It saves time.

    I have coded him some examples of CSS exactly matching table-based layouts, and provided many links to articles explaining the SEO and accessibility benefits. From the perspective of a client, I have been explaining to him that I wouldn't hire someone using outdated methods as their main strategy for layout. As he is my friend and I wish him every success, I believe it is important for him to gain the best start when pitching for work. The question again: how can I explain to a programmer that CSS positioning has many benefits over table-based layouts?

    Read the article

  • How to schedule a cron job to backup a MySql database every week?

    - by KevinM
    What is a command line I can use to back up a MySQL database every single week into a file name with the date (so that it doesn't collide with previous backups)? Also, is this a reasonable backup strategy? My database is relatively small (a complete export is only 3.2 megs right now). The churn rate is relatively low. I need to be able to get the complete DB back if something goes wrong. And it would be extra cool if there were a way to see the changes that occur across a time span.
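
    A sketch of a weekly crontab entry (Sunday 03:00) that writes a date-stamped, compressed dump; the paths, credentials file, and database name are assumptions (note that % must be escaped in crontab):

        0 3 * * 0 mysqldump --defaults-extra-file=/home/me/.my.cnf mydb | gzip > /backups/mydb-$(date +\%F).sql.gz

    Keeping the dumps in a directory that is itself versioned or synced offsite also gives a rough answer to the "see the changes across a time span" wish, since plain-text dumps diff reasonably well.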

    Read the article

  • Rails Nested Attributes, Relationship for Shared or Common Object

    - by SooDesuNe
    This has to be a common problem, so I'm surprised that Google didn't turn up more answers. I'm working on a Rails app that has several different kinds of entities which each need a relation to a shared entity. For example:

        Address: a model that stores the details of a street address (this is my shared entity).
        PersonContact: a model that includes things like home phone, cell phone and email address. This model needs to have an address associated with it.
        DogContact: obviously, if you want to contact a dog, you have to go to where it lives.

    So PersonContact and DogContact should have foreign keys to Address, even though they are really the "owning" objects of Address. This would be fine, except that accepts_nested_attributes_for is counting on the foreign key being in Address to work correctly. What's the correct strategy to keep the foreign key in Address, but have PersonContact and DogContact be the owning objects?
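
    A sketch of one common answer, assuming a polymorphic association: the foreign key (addressable_id/addressable_type) lives in the addresses table, each contact still reads as the owner, and accepts_nested_attributes_for works on the has_one side:

        class Address < ActiveRecord::Base
          belongs_to :addressable, :polymorphic => true
        end

        class PersonContact < ActiveRecord::Base
          has_one :address, :as => :addressable
          accepts_nested_attributes_for :address
        end

        class DogContact < ActiveRecord::Base
          has_one :address, :as => :addressable
          accepts_nested_attributes_for :address
        end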

    Read the article

  • How can I dispose of an object (say a Bitmap) when it becomes orphaned?

    - by Jelly Amma
    I have a class A providing Bitmaps to other classes B, C, etc. Class A holds its bitmaps in a ring queue, so after a while it will lose its reference to a bitmap. While it's still in the queue, the same Bitmap can be checked out by several classes, so that, say, B and C can both hold a reference to it. But it can also happen that only one of them checked out the Bitmap, or even neither of them. I would like to dispose of the bitmap when it's no longer needed by A, B or C. I suppose I have to make B and C responsible for somehow signaling when they're finished using it, but I'm not sure about the overall logic. Should it be a call to something like DisposeIfNowOrphan() that would be called:

        1. when the Bitmap gets kicked out of the queue in class A
        2. when B is finished with it
        3. when C is finished with it

    If that's the best strategy, how can I evaluate the orphan state? Any advice would be most welcome.
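
    A sketch of simple reference counting, which is the usual shape of that "orphan" test; the names here are assumptions, not the asker's API. The queue counts as one holder; each consumer acquires on checkout and releases when done, and the bitmap is disposed when the count hits zero:

        using System.Drawing;
        using System.Threading;

        public sealed class SharedBitmap
        {
            private readonly Bitmap _bitmap;
            private int _refCount = 1; // the ring queue's own reference

            public SharedBitmap(Bitmap bitmap) { _bitmap = bitmap; }

            public Bitmap Acquire()    // called by B, C, ... on checkout
            {
                Interlocked.Increment(ref _refCount);
                return _bitmap;
            }

            public void Release()      // called by A (on queue eviction), B, C
            {
                if (Interlocked.Decrement(ref _refCount) == 0)
                    _bitmap.Dispose();
            }
        }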

    Read the article

  • database vs flat file, which is a faster structure for "regex" matching with many simultaneous requests

    - by Jamex
    Hi, which structure returns results faster and/or is less taxing on the host server: a flat file or a database (MySQL)? Assume many users (100 users) are simultaneously querying the file/DB. Searches involve pattern matching against a static file/DB. The file has 50,000 unique lines (same data type). There could be many matches. There is no writing to the file/DB, just reads. Is it possible to duplicate the file/DB and write a logic switch to use the backup if the main one is in use? Which language is best for this type of structure: Perl for a flat file and PHP for a DB? Additional info: say I want to find all the cities that have the pattern "cis" in their names. Which is better/faster, using regex or string functions? Please recommend a strategy. TIA
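
    A small sketch of the regex-vs-string-function comparison on the database side; the table and column names are assumptions. For a fixed substring such as 'cis', plain LIKE is usually at least as fast as REGEXP since no pattern engine is involved, and neither can use an ordinary B-tree index when the pattern starts with a wildcard:

        SELECT name FROM cities WHERE name LIKE '%cis%';
        SELECT name FROM cities WHERE name REGEXP 'cis';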

    Read the article

  • Supervisor callback for child normal exit

    - by Aler
    I am creating a test app where there is one supervisor with the simple_one_for_one strategy and many worker children added to it dynamically. How can I implement a callback (or receive a message) in the supervisor that will be called when a child exits normally? The main goal is to notify some other process that all supervised worker processes are done and it's time to show the final report. How should I design this kind of behaviour? Should I create my own behaviour that combines supervisor and gen_server, or is there a way to do this with the standard OTP behaviours?
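
    A sketch of the usual alternative, since supervisors expose no callback for normal child exits: a separate coordinator process monitors each worker it starts and reports once all monitors have fired. Module and function names here are assumptions:

        -module(worker_coordinator).
        -export([run/2]).

        %% Start one worker per argument list under the simple_one_for_one
        %% supervisor, monitor each, then wait for all of them to exit.
        run(Sup, ArgsList) ->
            Refs = [begin
                        {ok, Pid} = supervisor:start_child(Sup, [Args]),
                        erlang:monitor(process, Pid)
                    end || Args <- ArgsList],
            wait_all(Refs).

        wait_all([]) ->
            all_workers_done;   %% notify the reporting process here
        wait_all(Refs) ->
            receive
                {'DOWN', Ref, process, _Pid, _Reason} ->
                    wait_all(lists:delete(Ref, Refs))
            end.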

    Read the article

  • Building a wiki-like data model in Rails

    - by lillq
    I have a data model in which I would like to have an item with a description that can be edited, and I would also like to keep track of all edits to the item. I am running into issues with my current strategy, which is:

        class Item < ActiveRecord::Base
          has_one :current_edit, :class_name => "Edit", :foreign_key => "current_edit_id"
          has_many :edits
        end

        class Edit < ActiveRecord::Base
          belongs_to :item
        end

    Can the Item have multiple associations to the same class like this? I was thinking that I should switch to keeping track of the edit version in the Edit object and then just sort the has_many relationship based on this version.
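
    A sketch of that version-column alternative; the "version" attribute is an assumption (old-style :order syntax to match the era of the question):

        class Item < ActiveRecord::Base
          has_many :edits, :order => "version DESC"

          def current_edit
            edits.first   # highest version, i.e. the latest edit
          end
        end

        class Edit < ActiveRecord::Base
          belongs_to :item
        end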

    Read the article

  • Backing up locally modified and new source files

    - by eran
    I'm wondering how other programmers back up changes that are not under source control yet, be it new files or modified ones. I'm mostly referring to medium-sized jobs: hardly worth the effort of making a private branch, but taking more than a day to complete. This is not a vendor-specific question; I'd like to see if various products have different solutions to the problem. I'd appreciate answers referring to SVN and distributed SCMs, though. I'm mostly wondering about the latter (Mercurial, Git, etc.): it's great that you have your own local repo, but do you back it up on a regular basis along with your source files? Note: I'm not asking about a general backup strategy; for that, we have IT. I'm seeking the best way to keep locally modified stuff backed up before it is checked back into the main repo.
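
    For the distributed case, a sketch of the common answer: push the whole local repository, unfinished branches included, to a private backup remote; the remote URL here is an assumption:

        git remote add backup ssh://backup-host/~/repos/myproject.git
        git push --mirror backup    # mirrors all refs, run as often as you like

    Uncommitted working-tree changes are not covered by this, which is one argument for committing early and often locally in Git or Mercurial.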

    Read the article

  • WCF - possible to override the Dispose of a proxy?

    - by pdiddy
    I'd like to override the Dispose method of a generated proxy (ClientBase), because disposing of a proxy calls Close, which can throw an exception when the channel is faulted. The only way I came up with was to create a partial class for my generated proxy and make it implement IDisposable:

        public partial class MyServiceProxy : IDisposable
        {
            #region IDisposable Members

            public void Dispose()
            {
                if (State != System.ServiceModel.CommunicationState.Faulted)
                    Close();
                else
                    Abort();
            }

            #endregion
        }

    I did some tests and my Dispose method is indeed called. Do you see any issue with this strategy? Also, I don't like the fact that I'll have to create this partial class for every generated proxy; it would be nice if I were able to make my proxy inherit from a base class...
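
    A sketch of the usual way around the per-proxy partial class: an extension method on ICommunicationObject, which every generated proxy implements, so one helper covers them all (the class and method names are assumptions):

        using System;
        using System.ServiceModel;

        public static class ProxyExtensions
        {
            public static void CloseSafely(this ICommunicationObject proxy)
            {
                if (proxy.State == CommunicationState.Faulted)
                {
                    proxy.Abort();
                    return;
                }
                try { proxy.Close(); }
                catch (CommunicationException) { proxy.Abort(); }
                catch (TimeoutException) { proxy.Abort(); }
            }
        }

    Note that Close can throw even on a non-faulted channel (e.g. a timeout), which is why the try/catch is there; the partial-class Dispose above would still propagate such exceptions.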

    Read the article

  • Validating Internationalized URLs - Is this going to be a problem?

    - by VirtuosiMedia
    After reading about the new Arabic URLs, and with more languages to come, how should URL validation be done for internationalized applications? Does the validation change at all and will existing solutions break? Is regex still a good approach? If so, what would that regex look like? If not, what's a good strategy? What are some good resources to read more on the topic? I ask this because it has the potential to cause a good many localized applications to have to be rewritten if they have to validate URLs at any point.
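
    One piece of the answer can be sketched without changing existing validators: normalize an internationalized hostname to its ASCII (Punycode) form first, then validate the ASCII form. In Python, for example, the standard "idna" codec does this; the sample hostname is IANA's Japanese IDN test domain:

        hostname = u"例え.テスト"
        ascii_host = hostname.encode("idna")  # b'xn--r8jz45g.xn--zckzah'

    Path and query components raise separate percent-encoding questions (the IRI-to-URI mapping), so full validation is more than a hostname check.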

    Read the article

  • Mocking imported modules in Python

    - by Evgenyt
    I'm trying to implement unit tests for a function that uses imported external objects. For example, helpers.py is:

        import os
        import pylons

        def some_func(arg):
            ...
            var1 = os.path.exists(...)
            var2 = os.path.getmtime(...)
            var3 = pylons.request.environ['HTTP_HOST']
            ...

    So when I'm creating a unit test for it, I do some mocking (minimock in my case) and replace the references to pylons.request and os.path:

        import helpers

        def test_some_func():
            helpers.pylons.request = minimock.Mock("pylons.request")
            helpers.pylons.request.environ = { 'HTTP_HOST': "localhost" }
            helpers.os.path = minimock.Mock(....)
            ...
            some_func(...)
            # assert ...

    This does not look good to me. Is there a better way or strategy to substitute imported functions/objects in Python?
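
    A sketch of the same test using the mock library's patch decorator (unittest.mock in the standard library since Python 3.3), which swaps the attributes in and restores them automatically; the argument passed to some_func is hypothetical. Decorators apply bottom-up, so the mocks arrive in that order:

        from unittest import mock

        import helpers

        @mock.patch("helpers.pylons")
        @mock.patch("helpers.os.path")
        def test_some_func(mock_path, mock_pylons):
            mock_path.exists.return_value = True
            mock_path.getmtime.return_value = 0
            mock_pylons.request.environ = {"HTTP_HOST": "localhost"}
            helpers.some_func("some-arg")
            # assert ...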

    Read the article

  • Constructor type not found

    - by WaffleTop
    Hello. What I am doing: I am taking the Microsoft Enterprise Library 4.1 and attempting to expand upon it using a few derived classes. I have created MyLogEntry, MyFormatter, and MyTraceListener, which derive from their respective base classes when you remove the "My" from their names. What my problem is: everything compiles fine, but when I run a test using Logger.Write(logEntry), it errors right after it initializes MyTraceListener with the message: "The current build operation (... EnterpriseLibrary.Logging.LogWriter, null]) failed: Constructor on type 'MyLogging.MyFormatter' not found. (Strategy type ConfiguredObjectStrategy, index 2)". I figured it was something to do with the constructor, so I tried removing it, adding it, and adding a call to the base class LogFormatter. Nothing has worked. Does anyone have insight into this problem? Is it maybe a reference issue? A bad App.config configuration? Thank you in advance.
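
    One possible reading of that error, sketched here as an assumption to verify: Enterprise Library builds formatters through a constructor whose signature matches the configuration element, so a formatter derived from TextFormatter typically has to expose and forward the template-string constructor:

        public class MyFormatter : Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.TextFormatter
        {
            // EntLib's configuration-driven build looks for a matching signature.
            public MyFormatter(string template) : base(template) { }
        }

    If MyFormatter derives directly from LogFormatter instead, the constructor the container expects depends on how the formatter is registered in App.config.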

    Read the article

  • Architecting a generic search result web control

    - by Bartek Tatkowski
    In a project I'm currently working on, we've stumbled upon the need for several kinds of search-result presentation controls. The search results are similar, but not identical. For example, the "office search" result might present the office name and location, while the "document search" result could contain document name, author and publishing date. These fields should be sortable. My current strategy is to employ the Factory pattern and do something like this:

        ISearchResult officeResults = SearchResultFactory.CreateOfficeSearchResults(data);
        ISearchResult documentResults = SearchResultFactory.CreateDocumentSearchResults(data);

    The problem is: I don't know how to implement the markup code. Should I just do Controls.Add(officeResults); in the containing page? Or is there some ASPX trickery to create generic web controls? Or maybe I'm overthinking this and should just create five classes? ;)
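
    One sketch of the markup side, assuming each result type gets its own user control; LoadControl and PlaceHolder are standard ASP.NET, while the control path and the ISearchResultView interface are hypothetical:

        // In the containing page; ResultsPlaceHolder is an <asp:PlaceHolder> in the .aspx.
        Control view = LoadControl("~/Controls/OfficeSearchResults.ascx");
        ((ISearchResultView)view).Bind(officeResults);   // assumed binding interface
        ResultsPlaceHolder.Controls.Add(view);

    This keeps per-type markup in .ascx files while shared sorting logic can live in a common base control.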

    Read the article

  • @OneToOne and @JoinColumn, auto delete null entity, doable?

    - by smallufo
    I have two entities with the following JPA annotations:

        @Entity
        @Table(name = "Owner")
        public class Owner implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "id")
            private long id;

            @OneToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
            @JoinColumn(name = "Data_id")
            private Data data;
        }

        @Entity
        @Table(name = "Data")
        public class Data implements Serializable {
            @Id
            private long id;
        }

    Owner and Data have a one-to-one mapping; the owning side is Owner. The problem occurs when I execute:

        owner.setData(null);
        ownerDao.update(owner);

    The "Owner" table's Data_id becomes null, which is correct, but the "Data" row is not deleted automatically. I have to write another DataDao and another service layer to wrap the two actions (ownerDao.update(owner); dataDao.delete(data);). Is it possible to have the Data row deleted automatically when the owning Owner sets it to null?
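
    A sketch of the usual fix, assuming JPA 2.0 is available: orphanRemoval deletes the Data row once the reference is nulled out (on older Hibernate, the provider-specific @Cascade annotation with DELETE_ORPHAN plays the same role):

        @OneToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL, orphanRemoval = true)
        @JoinColumn(name = "Data_id")
        private Data data;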

    Read the article

  • Avoiding dog-piling or thundering herd in a memcached expiration scenario

    - by Quintin Par
    I have the result of a query that is very expensive: it is the join of several tables plus a map-reduce job. The result is cached in memcached for 15 minutes. Once the cache expires, the queries are obviously run and the cache is warmed again. But at the point of expiration the thundering-herd problem can happen. One way to fix this, which I do right now, is to run a scheduled task that kicks in at the 14th minute. Somehow this looks very suboptimal to me. Another approach I like is nginx's proxy_cache_use_stale updating; mechanism: the webserver continues to deliver the stale cache while a thread kicks in at the moment of expiration and updates the cache. Has someone applied this to a memcached scenario, even though I understand it is a client-side strategy there? If it helps, I use Django.
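
    A sketch of a stale-while-revalidate pattern for memcached, assuming the python-memcached client: store a soft expiry inside the value, keep the real memcached TTL longer so stale data stays readable, and let only the client that wins an add() on a lock key recompute. All key names and TTLs are assumptions:

        import time

        SOFT_TTL = 15 * 60   # logical freshness window
        HARD_TTL = 20 * 60   # memcached TTL, longer so stale values survive
        LOCK_TTL = 60        # recompute lock, bounds how long a crash blocks refresh

        def get_expensive(mc, key, recompute):
            cached = mc.get(key)
            if cached is not None:
                value, soft_expires = cached
                # Fresh, or someone else already holds the recompute lock:
                if time.time() < soft_expires or not mc.add(key + ":lock", 1, time=LOCK_TTL):
                    return value
            value = recompute()
            mc.set(key, (value, time.time() + SOFT_TTL), time=HARD_TTL)
            mc.delete(key + ":lock")
            return value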

    Read the article

  • Design Solution For Storing-Fetching Images

    - by Chaitanya
    This is a design question I am facing. I have a collection of 1500 images to be displayed on an ASP.NET page; the images to be displayed differ from one page to another, and the count will increase in the time to come. (a) Is it a good idea to keep the images in the database? The round-trip time to fetch them from the database might be high. (b) Is it good to have all the images in a directory, with a virtual file system over it, so the application accesses the images from the directory? Is there any particular design strategy, in a traditional database, for fetching images with the least round-trip time? Does any solution other than a traditional database exist? PS: I use SQL Server to store these images.

    Read the article

  • How to access the backing field of an inherited class using fluent nhibernate

    - by Akk
    How do I set the access strategy in the mapping class to point to the inherited _photos field?

        public class Content
        {
            private IList<Photo> _photos;

            public Content()
            {
                _photos = new List<Photo>();
            }

            public virtual IEnumerable<Photo> Photos
            {
                get { return _photos; }
            }

            public virtual void AddPhoto() {...}
        }

        public class Article : Content
        {
            public string Body { get; set; }
        }

    I am currently using the following to try to locate the backing field, but an exception is thrown because it cannot be found:

        public class ArticleMap : ClassMap<Article>
        {
            HasManyToMany(x => x.Photos)
                .Access.CamelCaseField(Prefix.Underscore)  // _photos
            //...
        }

    I tried moving the backing field _photos directly into the Article class and the access works. So how can I access the backing field of an inherited class?
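
    A sketch of the simplest fix, offered as an assumption to verify against your Fluent NHibernate version: reflection against the derived type does not see private fields declared on a base class, so making the backing field protected (or mapping the collection on a ClassMap<Content> instead) usually resolves the "field not found" exception:

        public class Content
        {
            // protected instead of private, so the field is reachable
            // when the mapping reflects over the Article subclass
            protected IList<Photo> _photos = new List<Photo>();

            public virtual IEnumerable<Photo> Photos
            {
                get { return _photos; }
            }
        }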

    Read the article
