Search Results

Search found 3179 results on 128 pages for 'merge replication'.


  • iPhone: How to Maintain Original Image Size Throughout Image Edits

    - by maddy
    I am developing an iPhone app that resizes and merges images. I want to select two 1600x1200 photos from the photo library, merge them into a single image, and save the result back to the photo library. However, I can't get the right size for the merged image. I take two image views with 320x480 frames and set each view's image to one of the imported photos. After manipulating the images (zooming, cropping, rotating), I save the result to the album, but when I check the saved image its size is 600x800. How do I keep the original 1600x1200 size? I've been stuck on this problem for two weeks! Thanks in advance.
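
    A common cause of this is rendering the merged image at the views' on-screen size instead of at the sources' resolution. A minimal sketch of the alternative, assuming topImage and bottomImage hold the original 1600x1200 UIImages (the placement and blending here are placeholders for the app's real zoom/crop/rotate logic):

        // Draw both source images into a bitmap context created at full size.
        CGSize fullSize = CGSizeMake(1600.0, 1200.0);
        // Scale factor 1.0 keeps the context from inheriting the screen scale.
        UIGraphicsBeginImageContextWithOptions(fullSize, NO, 1.0);
        [bottomImage drawInRect:CGRectMake(0, 0, fullSize.width, fullSize.height)];
        // Any zoom/crop/rotate should be applied here, at full resolution,
        // rather than by capturing the 320x480 view hierarchy.
        [topImage drawInRect:CGRectMake(0, 0, fullSize.width, fullSize.height)
                   blendMode:kCGBlendModeNormal
                       alpha:0.5];
        UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        UIImageWriteToSavedPhotosAlbum(merged, nil, nil, NULL);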

    Read the article

  • MySQL Config on Large Machine

    - by Jonathon
    We have a Windows 2003 Enterprise Edition server (64-bit) running only MySQL 5.1.45 64-bit. It has 16G RAM and 10T of hard-drive space in RAID 10. We are having horrible performance from mysqld (85-100% CPU utilization). We were running a smaller machine with better performance, so I am assuming our my.ini file is not correct for our current machine. The my.ini file is as follows:

        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        [mysqld]
        port=3306
        basedir="D:/MySQL/"
        datadir="D:/MySQL/data"
        default-character-set=latin1
        default-storage-engine=MYISAM
        sql-mode=""
        skip-innodb
        skip-locking
        max_allowed_packet = 1M
        max_connections=800
        myisam_max_sort_file_size=5G
        myisam_sort_buffer_size=500M
        table_open_cache = 512
        table_cache=8000
        tmp_table_size=30M
        query_cache_size=50M
        thread_cache_size=128
        key_buffer_size=3072M
        read_buffer_size=2M
        read_rnd_buffer_size=16M
        sort_buffer_size=2M
        #replication settings (this is the master)
        log-bin=log
        server-id = 1

    Does anyone see anything wrong with this setup? For a machine with this much RAM, why in the world would mysqld eat up so much CPU? I know we can optimize some queries, etc., but it did run okay on a smaller machine, so I am pretty sure it is the config. Thanks in advance for any help.
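
    One thing that stands out: table_open_cache and table_cache are both set, to very different values (512 vs. 8000). In MySQL 5.1 these are the same variable under two names (table_cache is the deprecated alias), so the later line silently overrides the earlier one. As a hedged starting point only (assumed values, not tuned to this workload), a dedicated 16G MyISAM-only box might look like:

        # Sketch: assumed values for a dedicated 16G MyISAM server; benchmark first.
        [mysqld]
        key_buffer_size=4096M       # MyISAM index cache; ~25% of RAM is a common rule of thumb
        table_open_cache=8000       # keep one setting; drop the duplicate table_cache line
        tmp_table_size=256M         # fewer on-disk temporary tables for big GROUP BYs
        max_heap_table_size=256M    # effective in-memory temp-table cap is the min of these two
        query_cache_size=50M        # larger values can raise CPU via invalidation overhead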

    Read the article

  • Extending configuration for .Net 3.5 Applications

    - by Maximiliano Rios
    Due to a requirement in my current project, I have to build a configuration manager that merges local config information with configuration stored in a database. Custom configuration sections don't fit my needs; the problem is that I don't know a handler's type before loading certain information. For example, only after loading the database information will I be able to know what myhandler's type is, not before. So I thought about writing my own handler, but I can't leave the type blank for sections; .NET requires the type in order to match the myhandler nodes. I'm thinking of building a different parser to read the XML nodes, but I would prefer to keep this structure. I haven't found any information on this yet. Is there any way? Can I extend or hook into the framework so it can load types on the fly and validate nodes? Thanks in advance.
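
    One possible hook (a sketch under assumptions, not a verified pattern: the handler and resolver names here are made up): register a pass-through IConfigurationSectionHandler whose type .NET can always resolve, have it hand back the raw XmlNode, and re-parse that node once the database has revealed the real handler type:

        // Sketch: defer type resolution until the database has been read.
        using System;
        using System.Configuration;
        using System.Xml;

        // Declared as the section's type attribute, so .NET can always match it.
        public class DeferredSectionHandler : IConfigurationSectionHandler
        {
            // Hand the raw XML back so it can be re-parsed later.
            public object Create(object parent, object configContext, XmlNode section)
            {
                return section;
            }
        }

        public static class DeferredSectionResolver
        {
            // Call after the database reveals the concrete handler type name.
            public static object Resolve(string sectionName, string typeNameFromDb)
            {
                var node = (XmlNode)ConfigurationManager.GetSection(sectionName);
                var handlerType = Type.GetType(typeNameFromDb, true);
                var handler = (IConfigurationSectionHandler)
                              Activator.CreateInstance(handlerType);
                return handler.Create(null, null, node);
            }
        }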

    Read the article

  • Conditional Join - join one table two ways

    - by Jon H
    I have a set of (not very well normalised or relational) tables named PLAN, GROUP, PRODUCT, and CLIENT. Most have linkage, e.g. PLAN to CLIENT on CLNO, and GROUP to PRODUCT on PRODCD. However, the linkage between PLAN and GROUP is tricky. A plan has two fields of interest, GRPNO and PRODCD. What I want is: if GRPNO != 0, join GROUP on GRPNO; however, if GRPNO = 0, join GROUP on PRODCD. The frustrating thing is that the fields I want my queries to return are the same across the board; I just need to be able to vary the join, or join the same table twice. The best I can come up with is two queries merged using datasets, or possibly a union. Is there a nifty way to do this in one SELECT? I should point out that I am accessing FoxPro over ODBC to do this. Thank you!
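
    One way to express this in a single SELECT is to put the condition into the join predicate itself (a sketch; the aliases and returned columns are assumed):

        SELECT p.CLNO, g.PRODCD
          FROM PLAN p
          JOIN GROUP g
            ON (p.GRPNO <> 0 AND g.GRPNO = p.GRPNO)
            OR (p.GRPNO = 0  AND g.PRODCD = p.PRODCD)

    If the FoxPro ODBC driver balks at OR inside a join predicate, the UNION form is equivalent and often optimizes better:

        SELECT p.CLNO, g.PRODCD
          FROM PLAN p JOIN GROUP g ON g.GRPNO = p.GRPNO
         WHERE p.GRPNO <> 0
        UNION ALL
        SELECT p.CLNO, g.PRODCD
          FROM PLAN p JOIN GROUP g ON g.PRODCD = p.PRODCD
         WHERE p.GRPNO = 0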

    Read the article

  • What's faster: merging sets or dicts in Python?

    - by tipu
    I'm working with an app that is more CPU-bound than memory-bound, and I'm trying to merge two collections, which could be either sets or dicts. I can choose either one, but I'm wondering if merging dicts would be faster, since it's all in memory. Or is it always going to be O(n), with n being the size of the smaller set? The reason I asked about dicts rather than sets is that I can't convert a set to JSON; that results in {key1, key2, key3}, and JSON needs key/value pairs, so I'm using a dict so that json.dumps returns {key1: 1, key2: 1, key3: 1}. Yes, this is wasteful, but if it proves to be faster, then I'm okay with it.
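
    A quick way to settle this empirically (a sketch; the sizes are arbitrary) is to time both merges with timeit. Note, too, that the dict can be created only at the serialization boundary, so the working data stays a set either way:

        # Sketch: time set union vs. dict merge on equally sized inputs.
        import timeit

        setup = (
            "import random\n"
            "a = {random.randrange(10**6) for _ in range(10**5)}\n"
            "b = {random.randrange(10**6) for _ in range(10**5)}\n"
            "da = dict.fromkeys(a, 1)\n"
            "db = dict.fromkeys(b, 1)\n"
        )

        print("set union :", timeit.timeit("a | b", setup=setup, number=100))
        print("dict merge:", timeit.timeit("d = dict(da); d.update(db)",
                                           setup=setup, number=100))

        # If the dict exists only so json.dumps emits key/value pairs,
        # convert at the boundary instead of storing dicts:
        #   import json
        #   json.dumps(dict.fromkeys(a, 1))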

    Read the article

  • ClearCase: findmerge usage

    - by Keshav
    Hi, I have a branch B1 and another branch B2. I want all files and subfolders (recursively) inside a particular folder X on B1 (not the entire VOB) to be merged onto B2. What exact findmerge command do I need to use? Will the commands below operate on the entire VOB, or, if I run them from inside the directory in question, will that suffice?

        cleartool findmerge . -type dir -nc -fver .../dev/LATEST -merge
        cleartool findmerge . -nc -type file -fver .../dev/LATEST -print

    Thanks a lot in advance.
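
    For what it's worth, findmerge's scope is the pathname argument you give it, so pointing it at (or running it from) folder X restricts the walk to X rather than the whole VOB. A sketch, assuming the commands run in a view that selects B2 and that B1 is the source branch (the VOB path is a placeholder):

        # Run in a view selecting B2; merge only folder X from branch B1.
        cd /vobs/myvob/X
        cleartool findmerge . -type dir  -nc -fver .../B1/LATEST -merge
        cleartool findmerge . -type file -nc -fver .../B1/LATEST -merge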

    Read the article

  • Change the current branch to master in git

    - by Karel Bílek
    I have a repository in git. I made a branch, then did some changes both to master and to the branch. Then, tens of commits later, I realized the branch is in a much better state than master, so I want the branch to "become" master and to disregard the changes on master. I cannot merge it, because I don't want to keep the changes on master. What should I do? (This may well be a duplicate question, since it is trivial, but I have not found it here.)
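
    Two hedged sketches (good-branch is a placeholder name; both assume nobody needs master's discarded commits):

        # Option 1: rewrite master to match the branch (needs a force-push if shared).
        git checkout master
        git reset --hard good-branch
        git push --force origin master

        # Option 2: no history rewrite. The "ours" strategy records a merge
        # while keeping good-branch's content, then master fast-forwards to it.
        git checkout good-branch
        git merge -s ours master
        git checkout master
        git merge good-branch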

    Read the article

  • image_to_function in Rails

    - by FCastellanos
    I have this helper method in Rails so that an image can call a JavaScript function:

        def image_to_function(name, function, html_options = {})
          html_options.symbolize_keys!
          tag(:input, html_options.merge({
            :type => "image", :src => image_path(name),
            :onclick => (html_options[:onclick] ? "#{html_options[:onclick]}; " : "") + "#{function};"
          }))
        end

    I grabbed this code from the application helper of the Redmine source code. The problem I'm having is that when I click on the image it sends a POST. Does someone know how I can stop that? This is how I'm using it:

        <%= image_to_function "eliminar-icon.png", "mark_for_destroy(this, '.task')" %>

    Thanks a lot!
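
    For context: an <input type="image"> is a form submit control, so after running the onclick handler the browser submits (POSTs) the enclosing form. A sketch of the same helper with "return false" appended to the handler, which cancels the default submit:

        # Sketch: returning false from onclick stops the image input from submitting.
        def image_to_function(name, function, html_options = {})
          html_options.symbolize_keys!
          tag(:input, html_options.merge({
            :type    => "image",
            :src     => image_path(name),
            :onclick => (html_options[:onclick] ? "#{html_options[:onclick]}; " : "") +
                        "#{function}; return false;"
          }))
        end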

    Read the article

  • How to calculate order (big O) for more complex algorithms (i.e. quicksort)

    - by bangoker
    I know there are quite a bunch of questions about big O notation; I have already checked "Plain english explanation of Big O", "Big O, how do you calculate/approximate it?", and "Big O Notation Homework--Code Fragment Algorithm Analysis?", to name a few. I know by "intuition" how to calculate it for n, n^2, n!, and so on; however, I am completely lost on how to calculate it for algorithms that are log n, n log n, n log log n, and the like. What I mean is, I know that quicksort is n log n (on average), but why? The same goes for merge sort, comb sort, etc. Could anybody explain to me, in a not-too-mathy way, how you calculate this? The main reason is that I'm about to have a big interview, and I'm pretty sure they'll ask this kind of thing. I have researched for a few days now, and everybody seems to have either an explanation of why bubble sort is n^2 or an (for me) unreadable explanation à la Wikipedia. Thanks!
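
    The usual non-rigorous route is a recurrence: write down what one level of the algorithm costs and how many subproblems it recurses into, then unroll. A sketch of the merge sort argument (quicksort's average case has the same shape) in LaTeX:

        % One level of merging costs cn; halving the input gives log2(n) levels.
        T(n) = 2\,T(n/2) + cn, \qquad T(1) = c
        % Unrolling k levels:
        T(n) = 2^k\,T(n/2^k) + k\,cn
        % The recursion bottoms out when n/2^k = 1, i.e. k = \log_2 n, so
        T(n) = n\,T(1) + cn\log_2 n = O(n \log n)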

    Read the article

  • MySQL Export with Column Headings

    - by st4nt0n
    Hello - I am very, very new to MySQL. I've got experience in general technical terms, but not with the syntax or concepts of MySQL. I have been tasked with exporting a table from MySQL into a pipe-delimited .txt or .xls that I can use to manually add 7500 more records to, then import back into the table. I tried to use INTO OUTFILE, but I don't get column headings, which I need as a reference for merging the new records. Is there a good resource that can explain this to a complete novice? I would usually go down to my bookstore and start learning, but I'm on a bit of a time crunch. Thanks, all!
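
    The standard workaround (a sketch; the table and column names here are made up) is to UNION a row of literal header strings ahead of the data, with the INTO OUTFILE clause on the final SELECT:

        SELECT 'id', 'name', 'price'
        UNION ALL
        SELECT id, name, price
          FROM products
          INTO OUTFILE '/tmp/products.txt'
          FIELDS TERMINATED BY '|'
          LINES TERMINATED BY '\n';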

    Read the article

  • Efficient mass string search problem

    - by Monomer
    The problem: a large static list of strings is provided, along with a pattern string composed of literal data and wildcard elements (* and ?). The idea is to return all the strings that match the pattern, simple enough. Current solution: I'm using a linear approach, scanning the large list and globbing each entry against the pattern. My question: are there any suitable data structures I can store the large list in so that the search's complexity is less than O(n)? Perhaps something akin to a suffix trie? I've also considered using bi- and tri-grams in a hashtable, but the logic required to evaluate a match, based on merging the lists of words returned against the pattern, is a nightmare, and I'm not convinced it's the correct approach.
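
    For what it's worth, the n-gram idea is usually structured as a prefilter, which keeps the merge logic manageable: index every trigram of every word, intersect the candidate sets for the trigrams found in the pattern's literal runs, and glob only the survivors. A rough Python sketch, assuming the pattern's literal runs are at least three characters (shorter runs fall back to a full scan):

        # Sketch: trigram prefilter + fnmatch verification.
        import re
        from collections import defaultdict
        from fnmatch import fnmatchcase

        def build_index(words):
            index = defaultdict(set)
            for i, w in enumerate(words):
                for j in range(len(w) - 2):
                    index[w[j:j + 3]].add(i)
            return index

        def search(pattern, words, index):
            # Literal runs between wildcards: "foo*bar?x" -> ["foo", "bar"]
            runs = [r for r in re.split(r"[*?]+", pattern) if len(r) >= 3]
            if runs:
                candidates = None
                for run in runs:
                    for j in range(len(run) - 2):
                        hits = index.get(run[j:j + 3], set())
                        candidates = set(hits) if candidates is None else candidates & hits
                ids = candidates
            else:
                ids = range(len(words))   # no usable literals: full scan
            return [words[i] for i in ids if fnmatchcase(words[i], pattern)]

        words = ["merge", "merger", "replication", "replica"]
        idx = build_index(words)
        print(search("repl*", words, idx))   # finds 'replication' and 'replica'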

    Read the article

  • Changing Corosync/Heartbeat pair's active node based on MySQL/Galera cluster state

    - by Hace
    Background: I'm planning to build a high-availability "cluster" for our Zabbix instance by placing two physical servers in one server room and two in another. In each server room, one physical server will run Zabbix on RHEL and the other will run Zabbix's MySQL database, also on RHEL. I'd prefer synchronous replication for the MySQL nodes, so I'm planning to use Galera in a master-slave configuration. The Zabbix instances on the two Zabbix servers would be controlled by Heartbeat/Corosync (although Red Hat Cluster Suite is also an option). If the Zabbix server in server room A goes down, the one in server room B becomes active, and vice versa; ditto for the MySQL servers/instances. In either of those cases, however, the connection between the Zabbix server and the MySQL server becomes significantly slower, as it has to travel over the WAN.

    Question: Is it possible to configure the Heartbeat/Corosync pair to instruct the MySQL/Galera cluster to switch its master node to the one (if available) in the same server room as the active Heartbeat/Corosync node? And, more challengingly, is it possible to do the same in the other direction, i.e. have the Galera cluster change the active Heartbeat/Corosync server to the one in the same room as the active MySQL master after a failover, in order to avoid unnecessary WAN transfers between the application and its DB?

    Theories: Most likely I can get Corosync to run something that logs in to one of the DB nodes to change the MySQL/Galera master, but I don't know whether anything similar is really possible in the other direction in Galera. Is it possible to define a "service" in Corosync/Heartbeat so that the Zabbix service and its MySQL service migrate as one where possible? Using the DB server that's behind the WAN should still be a better option than DB downtime. Am I just using too many tools to solve a problem that'd be far simpler with something else?

    Read the article

  • Using jQuery copy plugin from CSS Tricks

    - by ftntravis
    I'm trying to use this plugin that CSS-Tricks suggests: http://css-tricks.com/snippets/jquery/duplicate-plugin/ Shouldn't the following allow me to click a button and create a copy?

        $.fn.duplicate = function(count, cloneEvents) {
          var tmp = [];
          for (var i = 0; i < count; i++) {
            $.merge(tmp, this.clone(cloneEvents).get());
          }
          return this.pushStack(tmp);
        };

        $('.copy').click(function() {
          $('#form li').duplicate(5).appendTo('#form');
        };

    It's not working when I click it :(
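
    One hedged observation: as transcribed, the click call is missing its closing parenthesis, and that syntax error would stop the whole script block from running, so the handler never binds. A corrected sketch:

        $('.copy').click(function() {
            $('#form li').duplicate(5).appendTo('#form');
        });   // was "};" -- note the added ")"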

    Read the article

  • ASP.NET MVC elegant UI and ModelBinder authorization

    - by SDReyes
    We know authorization is a cross-cutting concern, and we do everything we can to avoid mixing business logic into our views. But I still haven't found an elegant way to filter UI components (e.g. widgets, form elements, tables) by the current user's roles without contaminating the view with business logic. The same applies to model binding. Example: a product-creation form with the fields Name, Price, and Discount, and two roles. The Administrator role is allowed to see and modify Name, Price, and Discount; the Administrator Assistant role is allowed to see and modify only Name and Price. The fields shown differ per role, and model binding needs to ignore the Discount field for the Administrator Assistant role. How would you do it?
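
    One hedged shape for the view side (a sketch; the helper and role names are illustrative): wrap the role check in an HtmlHelper extension, so views declare intent without embedding the logic:

        // Sketch: role-gated rendering helper (names are illustrative).
        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        public static class AuthorizedHtmlHelpers
        {
            // Renders the editor only when the current user holds one of the roles.
            public static MvcHtmlString EditorForRoles(this HtmlHelper html,
                                                       string fieldName,
                                                       params string[] roles)
            {
                var user = html.ViewContext.HttpContext.User;
                foreach (var role in roles)
                    if (user.IsInRole(role))
                        return html.Editor(fieldName);
                return MvcHtmlString.Empty;
            }
        }

    In a view this reads as <%= Html.EditorForRoles("Discount", "Administrator") %>, and the binding side can mirror it with a custom ModelBinder that skips properties the current user's roles may not set.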

    Read the article

  • Help setting up my .git/config file for Heroku AND my Unfuddle account

    - by 05WRXSTi
    OK, I have three different computers that I work from, and right now their configurations are all different, so I have to push/pull in a particular way on each, which is very bothersome. What I want to do is have ONE config file that I can use on all three that will let me do the following:

        git push unfuddle
        git pull heroku
        git push unfuddle
        git pull heroku

    I'm new to git, so I know that maybe I need heroku master or heroku origin or something. Here is what my config file looks like right now (the git URLs have been changed to protect the innocent):

        [core]
            repositoryformatversion = 0
            filemode = true
            bare = false
            logallrefupdates = true
        [remote "origin"]
            fetch = +refs/heads/*:refs/remotes/origin/*
            url = git@heroku.com:HEROKU-APP.git
        [branch "master"]
            remote = origin
            merge = refs/heads/master
        [remote "unfuddle"]
            fetch = +refs/heads/*:refs/remotes/origin/*
            url = git@UNFUDDLE-APP.unfuddle.com:UNFUDDLE-APP/UNFUDDLE-APP.git

    What should I change so that I can easily push and pull to/from both of these repos? Thanks!
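
    A hedged sketch of a cleaner layout (the remote names and restored user@host forms are assumptions, since the pasted config had its addresses redacted). The notable bug above is that the "unfuddle" remote's fetch line still writes into refs/remotes/origin/*, so the two remotes clobber each other's remote-tracking branches; giving each remote its own namespace fixes that:

        # One remote per service, each with its own tracking namespace.
        [remote "heroku"]
            url = git@heroku.com:HEROKU-APP.git
            fetch = +refs/heads/*:refs/remotes/heroku/*
        [remote "unfuddle"]
            url = git@UNFUDDLE-APP.unfuddle.com:UNFUDDLE-APP/UNFUDDLE-APP.git
            fetch = +refs/heads/*:refs/remotes/unfuddle/*
        [branch "master"]
            remote = unfuddle           # default for a bare "git pull" / "git push"
            merge = refs/heads/master

        # Then, identically on all three machines:
        #   git push unfuddle master
        #   git push heroku master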

    Read the article

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could share sessions between the new servers by changing only three lines in the php.ini file (session.save_handler and session.save_path). I replaced:

        session.save_handler = files

    with:

        session.save_handler = memcache

    Then on the master webserver I set session.save_path to point to localhost:

        session.save_path="tcp://localhost:11211"

    and on the slave webserver I set session.save_path to point to the master:

        session.save_path="tcp://192.168.0.1:11211"

    Job done; I tested it and it works. But... obviously, using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by that, but I'm more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load-balanced to the slave webserver, their session is fetched across the network from the master webserver. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example:

        Master: session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211"
        Slave:  session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211"

    Would this successfully share sessions across the servers AND help performance, i.e. save network traffic 50% of the time? Or is this technique only for failover (e.g. when one memcache daemon is unreachable)? Note: I'm not really asking specifically about memcache replication; it's more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in any of the stores. As I'm writing this, I'm thinking I'm asking a bit much from PHP, lol... Assume: no sticky sessions, round-robin load balancing, LAMP servers.
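
    For what it's worth, the stock memcache session handler treats a comma-separated save_path as one hashed pool rather than an ordered failover list: each session id consistently maps to one daemon in the pool, so with two servers roughly half of all session lookups will cross the network no matter which entry is listed first. A sketch of the usual pool setup, identical on BOTH webservers (the redundancy knob assumes pecl/memcache 3.x, where it exists):

        ; Identical on both webservers, same server order, so key hashing agrees.
        session.save_handler = memcache
        session.save_path = "tcp://192.168.0.1:11211,tcp://192.168.0.2:11211"

        ; Consistent hashing keeps most keys in place when a daemon drops out.
        memcache.hash_strategy = consistent

        ; pecl/memcache 3.x: write each session to 2 servers so one daemon
        ; crashing doesn't log half your users out.
        memcache.session_redundancy = 2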

    Read the article

  • Setting multiple datasources for a report in ASP.NET

    - by Nandini
    Hi all, I have a report whose data is derived from two stored procedures, so I need to set both datasources when generating the report. (Reports that have only one SP, i.e. only one datasource, work properly.) For setting the datasource, I wrote code like this:

        Dim reportDocument As ReportDocument
        Dim reportPath As String = Server.MapPath("CrystalRpts\Report.rpt")
        reportDocument.Load(reportPath)

        ' Function for setting the connection
        SetDBLogonForReport(MyConnectionInfo, reportDocument)

        Dim dt1 As DataTable = Datasource1
        Dim dt2 As DataTable = Datasource2
        dt1.Merge(dt2)
        reportDocument.SetDataSource(dt1)
        CrystalReportViewer.ReportSource = reportDocument

    But the report is not generated; it shows the following error:

        The Report requires additional information
        Servername: - Server
        Database: - Database
        UserID: -
        Password: -

    What could be the reason for this error?
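
    One hedged possibility (a sketch; the table names are assumed): dt1.Merge(dt2) produces a single DataTable that no longer matches either of the two tables the .rpt expects, so Crystal falls back to prompting for server, database, user id, and password. Feeding each result set to its matching report table usually avoids the prompt:

        ' Sketch: one DataTable per report table; names must match the .rpt.
        Dim reportDocument As New ReportDocument()
        reportDocument.Load(Server.MapPath("CrystalRpts\Report.rpt"))
        SetDBLogonForReport(MyConnectionInfo, reportDocument)

        dt1.TableName = "StoredProc1"   ' assumed names
        dt2.TableName = "StoredProc2"
        reportDocument.Database.Tables("StoredProc1").SetDataSource(dt1)
        reportDocument.Database.Tables("StoredProc2").SetDataSource(dt2)
        CrystalReportViewer.ReportSource = reportDocument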

    Read the article

  • Routinely sync a branch to master using git rebase

    - by m1755
    I have a Git repository with a branch that hardly ever changes (nobody else is contributing to it). It is basically the master branch with some code and files stripped out. Having this branch around makes it easy for me to package up a leaner version of my project without having to strip out the code and files manually every time. I have been using git rebase to keep this branch up to date with master, but I always get this warning when I try to push the branch after rebasing:

        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again.  See the 'Note about
        fast-forwards' section of 'git push --help' for details.

    I then use git push --force and it works, but I feel like this is probably bad practice. I want to keep this branch "in sync" with master quickly and easily. Is there a better way of handling this task?
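
    Since nobody else works on the branch, the force-push is mostly cosmetic, but a merge-based sync avoids it entirely because the branch then only ever moves forward. A sketch (lean-branch is a placeholder name):

        # Sync by merging instead of rebasing, so pushes stay fast-forward.
        git checkout lean-branch
        git merge master
        git push origin lean-branch     # no --force needed

        # If you stay with rebase, the --force is expected: rebasing rewrites
        # the branch's commits, so the remote ref cannot fast-forward to them.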

    Read the article

  • How sophisticated should the DAL be?

    - by Andrew Florko
    Basically, a DAL (Data Access Layer) should provide simple CRUD (Create/Read/Update/Delete) methods, but I am always tempted to create more sophisticated methods in order to minimize database roundtrips from the Business Logic Layer. What do you think about the following extensions to CRUD (most of them are OK, I suppose)?

        Read:   GetById, GetByName, GetPaged, GetByFilter, etc.
        Create: GetOrCreate methods (the entity is returned from the DB, or created and returned if not found); Create(lots-of-relations) instead of Create followed by multiple AssignTo calls
        Update: Merge methods (lists of entities are updated, created, and deleted in one call)
        Delete: Delete(bool children) with optional deletion of children; Cleanup methods

    Where do you usually implement entity-cache capabilities, DAL or BLL? (My choice is BLL, but I have seen DAL implementations too.) And where is the boundary at which you decide "this operation is too specific, so I should implement it as multiple DAL calls in the Business Logic Layer"? I have often found insufficient BLL operations implemented in a dozen database roundtrips because the developer was afraid to create a slightly more sophisticated DAL. Thank you in advance!
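
    As one hedged reference point (a sketch; all names are illustrative), a middle-ground DAL contract often ends up as a generic repository with a few query shapes, leaving anything more specific to the BLL:

        // Sketch: a middle-ground generic DAL contract (illustrative names).
        using System;
        using System.Collections.Generic;
        using System.Linq.Expressions;

        public interface IRepository<T> where T : class
        {
            T GetById(int id);
            IList<T> GetByFilter(Expression<Func<T, bool>> filter); // subsumes GetByName etc.
            IList<T> GetPaged(int pageIndex, int pageSize);
            T GetOrCreate(int id, Func<T> factory);   // one roundtrip instead of Read+Create
            void Merge(IEnumerable<T> entities);      // batch update/insert/delete
            void Delete(T entity, bool includeChildren);
        }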

    Read the article

  • In Haskell, how can you sort a list of infinite lists of strings?

    - by HaskellNoob
    So basically, if I have a (finite or infinite) list of (finite or infinite) lists of strings, is it possible to sort the list by length first and then by lexicographic order, excluding duplicates? A sample input/output would be:

        Input:  [["a", "b", ...], ["a", "aa", "aaa"], ["b", "bb", "bbb", ...], ...]
        Output: ["a", "b", "aa", "bb", "aaa", "bbb", ...]

    I know that the input list is not a valid Haskell expression, but suppose there is input like that. I tried using a merge algorithm, but it tends to hang on the inputs I give it. Can somebody explain and show a decent sorting function that can do this? If there isn't any such function, can you explain why? In case somebody didn't understand the sorting order I mean: the shortest strings are sorted first, AND if two or more strings have the same length, then they are sorted using the < operator. Thanks!
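
    For a finite number of (possibly infinite) inner lists, a lazy two-way merge over the (length, then lex) order works even on infinite lists, because each output element needs only the two head elements inspected. A sketch, assuming each inner list is itself already sorted in that order, as in the example (infinitely many lists additionally need the lists ordered by their heads; Data.List.Ordered.unionAll from the data-ordlist package handles that case):

        -- Sketch: merge lists sorted by (length, then lex), dropping duplicates.
        import Data.Ord (comparing)

        shortLex :: String -> String -> Ordering
        shortLex a b = case comparing length a b of
                         EQ -> compare a b
                         o  -> o

        merge2 :: [String] -> [String] -> [String]
        merge2 [] ys = ys
        merge2 xs [] = xs
        merge2 (x:xs) (y:ys) = case shortLex x y of
          LT -> x : merge2 xs (y:ys)
          EQ -> x : merge2 xs ys          -- drop the duplicate
          GT -> y : merge2 (x:xs) ys

        mergeAll :: [[String]] -> [String]
        mergeAll = foldr merge2 []        -- fine for finitely many lists

        -- take 6 (mergeAll [["a","aa","aaa"], ["b","bb","bbb"], ["a","ab"]])
        --   == ["a","b","aa","ab","bb","aaa"]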

    Read the article

  • Windows Search SQL query for multiple computers at once

    - by Josh
    I can run normal searches just fine. Windows 7 won't let me add a network share to my local index, but I can query the remote index just fine. The problem is that I can't find a way to query two indexes at once. I was hoping that something like this would work:

        SELECT System.ItemName
          FROM compA.SystemIndex, compB.SystemIndex
         WHERE SCOPE='file://compA/pathA' OR SCOPE='file://compB/pathB'

    but it doesn't. For simple queries, I can query compA and compB separately and then merge the results myself, but I'm hoping for a better way. Does anybody here have experience with this?
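
    In case the two-query merge remains the only route: the Windows Search OLE DB provider takes one machine per FROM clause, so a small client-side union is the usual fallback. A hedged C# sketch (the machine names and paths are placeholders, and it assumes both remote indexes are reachable):

        // Sketch: query each machine's index separately, then union the results.
        using System;
        using System.Collections.Generic;
        using System.Data.OleDb;

        class MultiIndexSearch
        {
            const string ConnStr =
                "Provider=Search.CollatorDSO;Extended Properties='Application=Windows'";

            static IEnumerable<string> Query(string machine, string scope)
            {
                string sql = string.Format(
                    "SELECT System.ItemName FROM {0}.SystemIndex WHERE SCOPE='{1}'",
                    machine, scope);
                using (var conn = new OleDbConnection(ConnStr))
                using (var cmd = new OleDbCommand(sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            yield return reader.GetString(0);
                }
            }

            static void Main()
            {
                var results = new List<string>();
                results.AddRange(Query("compA", "file://compA/pathA"));
                results.AddRange(Query("compB", "file://compB/pathB"));
                results.Sort();   // the "merge" step: order however the UI needs
                foreach (var name in results) Console.WriteLine(name);
            }
        }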

    Read the article

  • Using set in Python inside a loop

    - by user210481
    I have the following list in Python:

        [[1, 2], [3, 4], [4, 6], [2, 7], [3, 9]]

    I want to group the pairs into

        [[1, 2, 7], [3, 4, 6, 9]]

    My code to do this looks like this:

        l = [[1, 2], [3, 4], [4, 6], [2, 7], [3, 9]]
        lf = []
        for li in l:
            for lfi in lf:
                if lfi.intersection(set(li)):
                    lfi = lfi.union(set(li))
                    break
            else:
                lf.append(set(li))

    lf is my final list. I loop over l and lf, and when I find an intersection between an element of l and an element of lf, I would like to merge them (union). But I can't figure out why this is not working. The first two elements of l are inserted with the append call, but the union is not working: my final list lf looks like

        [set([1, 2]), set([3, 4])]

    It seems to be something pretty basic, but I'm not familiar with sets. I appreciate any help. Thanks!
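
    The likely culprit: lfi = lfi.union(set(li)) rebinds the loop variable to a brand-new set and never touches the set stored in lf. Sets are mutable, so updating in place works; a sketch (a single pass can still miss chains that only become connected later, so it also merges groups until stable):

        # Sketch: mutate the stored set in place, then merge until stable.
        l = [[1, 2], [3, 4], [4, 6], [2, 7], [3, 9]]

        lf = []
        for li in l:
            for lfi in lf:
                if lfi & set(li):
                    lfi.update(li)      # in-place union; modifies the set inside lf
                    break
            else:
                lf.append(set(li))

        # Merge groups that became connected indirectly.
        merged = True
        while merged:
            merged = False
            for i in range(len(lf)):
                for j in range(i + 1, len(lf)):
                    if lf[i] & lf[j]:
                        lf[i] |= lf[j]
                        del lf[j]
                        merged = True
                        break
                if merged:
                    break

        print([sorted(s) for s in lf])   # [[1, 2, 7], [3, 4, 6, 9]]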

    Read the article

  • Cannot connect to Windows server by name over VPN connection

    - by ErocM
    I have a rented dedicated Windows server on a public IP that is acting as a SQL Server and VPN server. I need to connect to this server by computer name to get replication in place. I cannot use an IP address due to a separate issue, so we are going the VPN route. That is my primary problem: after I connect to this server's VPN, I can connect to SQL Server using the IP address, but I cannot connect by the computer's name. Right now there is no hardware firewall on it, since I had it removed to test this issue. I am running Windows 2008 Enterprise Server as the VPN server. I am not sure if the route print from the workstation trying to connect will help any, but here is the info:

        IPv4 Route Table
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                 10.0.0.0        255.0.0.0         10.0.0.1       10.0.0.2       21
                 10.0.0.2  255.255.255.255          On-link       10.0.0.2      276

    Any other info needed? Thanks for the help!

    ========= CLARIFICATION ON A FEW THINGS #1 =========
    I connect to the server via "Control Panel\Network and Internet\Network and Sharing Center\Connect or Disconnect", and the VPN connection shows as connected. (Screenshots of the server's and workstation's network details are omitted here.)

    ========= CLARIFICATION ON A FEW THINGS #2 =========
    I've tried to connect directly to the SQL Server as I did above, but with the computer's name, and I couldn't get to it. I also tried to net view the server by name from the workstation, and it couldn't find it.
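
    A hedged stopgap while the underlying name resolution is sorted out (VPN clients often can't use NetBIOS broadcasts or the remote LAN's DNS by default): pin the name locally in the workstation's hosts file, using the server's VPN-side address. The name and address below are assumptions; substitute the real ones.

        # C:\Windows\System32\drivers\etc\hosts on the workstation
        10.0.0.1    SERVERNAME

        # Then verify:
        #   ping SERVERNAME
        #   net view \\SERVERNAME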

    Read the article

  • Extract page from PDF using iText and clojure

    - by KobbyPemson
    I am trying to extract a single page from a PDF with Clojure by translating the splitPDF method I found here: http://viralpatel.net/blogs/itext-tutorial-merge-split-pdf-files-using-itext-jar/ I keep getting this error:

        IOException Stream Closed  java.io.FileOutputStream.writeBytes (:-2)

    It also prevents me from opening the document while the REPL is still open; once I close the REPL, I can access the document. Why do I get the error? How do I fix it? And how can I make this more Clojurey?

        (import '(com.itextpdf.text Document)
                '(com.itextpdf.text.pdf PdfReader PdfWriter PdfContentByte
                                         PdfImportedPage BaseFont)
                '(java.io File FileInputStream FileOutputStream InputStream OutputStream))

        (defn extract-page [src dest pagenum]
          (with-open [d  (Document.)
                      os (FileOutputStream. dest)]
            (let [srcpdf  (->> src FileInputStream. PdfReader.)
                  destpdf (PdfWriter/getInstance d os)]
              (doto d
                (.open)
                (.newPage))
              (.addTemplate (.getDirectContent destpdf)
                            (.getImportedPage destpdf srcpdf pagenum)
                            0 0))))
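
    A hedged diagnosis: with-open closes its bindings in reverse order, so here os (the FileOutputStream) is closed first, and closing the Document afterwards makes iText flush its final bytes into an already-closed stream, hence the IOException. Binding the stream first, so it closes last, is one sketch of a fix (closing the PdfReader also releases the source file, which may be what keeps things locked while the REPL is open):

        ;; Sketch: open the stream first so with-open closes it LAST,
        ;; after the Document has flushed its final bytes into it.
        (defn extract-page [src dest pagenum]
          (with-open [os (FileOutputStream. dest)
                      d  (Document.)]
            (let [srcpdf  (->> src FileInputStream. PdfReader.)
                  destpdf (PdfWriter/getInstance d os)]
              (doto d (.open) (.newPage))
              (.addTemplate (.getDirectContent destpdf)
                            (.getImportedPage destpdf srcpdf pagenum)
                            0 0)
              (.close srcpdf))))   ; also release the source reader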

    Read the article

  • SVN best practice for a "branch" of your main product?

    - by Steffen
    At my job we develop websites; however, now we're going to make a "whitelabelled" version of a site, which basically means it's the same site with a different logo, hosted on a different domain. It will also have minor graphical differences, but overall the engine is the same. My initial thought for keeping this in SVN was to just make a branch for it; however, I'm not quite certain whether this could give me trouble later on. Normally I keep my branches somewhat short-lived, mainly using them to develop a new feature without disturbing trunk. We need to be able to merge trunk changes into this whitelabel version, which is why I thought about branching it in the first place. So what's the best way to achieve this?
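
    A long-lived branch plus periodic catch-up merges is the usual pattern here, and with SVN 1.5+ merge tracking the catch-up is a single command. A sketch (the repository URL is a placeholder):

        # Create the long-lived whitelabel branch once.
        svn copy http://svn.example.com/repo/trunk \
                 http://svn.example.com/repo/branches/whitelabel \
                 -m "Create whitelabel branch of the main site"

        # Periodically, from a working copy of the branch:
        svn merge http://svn.example.com/repo/trunk
        svn commit -m "Catch-up merge from trunk"

        # With 1.5+ merge tracking, each catch-up picks up only the trunk
        # revisions not already merged.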

    Read the article
