Search Results

Search found 18728 results on 750 pages for 'setup deployment'.


  • Linker, Libraries & Directories Information

    - by m00st
    I've finished both my C++ 1/2 classes and we did not cover anything on linking to libraries or adding additional libraries to C++ code. I've been having a hard time trying to figure this out, and I've been unable to find basic information about linking to object files. Initially I thought the problem was the IDE (NetBeans, and Code::Blocks), but I've been unable to get wxWidgets or gtkmm set up either.

    Can someone point me in the right direction on the terminology and basic information about #including files and linking files in a C++ application? Basically I want/need to know everything in regards to this process:

    - The difference between .dll, .lib, .o, .lib.a and .dll.a files.
    - The difference between a .h file and a "library" (.dll, .lib -- correct?).

    I understand I need to read the documentation of the compiler I am using; however, all compilers (that I know of) use a linker and headers, so I need to learn this information. Please point me in the right direction! :]

    So far on my quest I've found out:

    - The linker links libraries that are already compiled to your project.
    - .a files are static libraries (.lib in Windows).
    - A .dll in Windows is a shared library (.so in *nix).

    Thanks
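
    A minimal sketch of the compile/link split may make the terminology concrete. Everything below is hypothetical -- the file names, the triple() function and the GCC/MinGW build commands in the comments are made up for illustration, and the exact flags depend on your toolchain:

        // mathutil.h -- a header only *declares* things; the compiler is
        // satisfied as soon as it has seen this declaration.
        int triple(int x);

        // mathutil.cpp -- the definition, compiled separately:
        //     g++ -c mathutil.cpp -o mathutil.o        (object file, ".o")
        //     ar rcs libmathutil.a mathutil.o          (static library, ".a"/".lib")
        int triple(int x) { return 3 * x; }

        // main.cpp -- including the header satisfies the compiler; the *linker*
        // still needs the compiled definition:
        //     g++ main.cpp -L. -lmathutil -o app
        // Leave out -lmathutil and the link step fails with
        // "undefined reference to `triple(int)'".
        #include "mathutil.h"
        #include <iostream>

        int main() {
            std::cout << triple(14) << '\n';
            return 0;
        }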


  • Paypal Encrypted Website payments

    - by John Isaacks
    I am trying to integrate a PayPal Website Payments Standard "Cart Upload" payment type into my shopping cart. I integrated Google Checkout a while back and did not find it nearly as confusing as I am finding PayPal. I am getting info on how to encrypt the cart from here: https://cms.paypal.com/us/cgi-bin/?&cmd=_render-content&content_ID=developer/e_howto_html_encryptedwebpayments#id08A3I0P017Q

    PayPal says I need to generate a private key and a public certificate using OpenSSL. I went to OpenSSL and downloaded the latest release, which is just a folder containing various files; I see no application I can use, and I'm not sure what to do here.

    Even if I were to get OpenSSL to generate a private key and public cert, the next step is to download either an MS or Java command-line tool to create the encrypted cart ahead of time with the cart total, tax, etc., which sounds crazy to me -- as if I am supposed to do this manually prior to every order?? Obviously I do not know beforehand which items the customer is going to buy, so I need this to be done on the fly on my website using PHP. But I am completely lost. There has to be a way to set up dynamic secure cart uploads to PayPal. Can someone please point me in the right direction?


  • InstallExecuteSequence cache interferes with custom action operation

    - by Dima G
    I need to upgrade a product that could be installed in per-user context to a new version that is always in per-machine context. The requirements are:

    - Whether the old version was installed in a per-user (no matter by whom) or per-machine context should be completely seamless to the administrator who performs the upgrade.
    - The MSI upgrade should succeed without needing the password of the user who originally installed the previous version in a per-user context.
    - The installation should be performed from a single .msi file (no setup.exe is allowed).
    - The installation should be able to run in silent (non-UI) mode.
    - No reboots are allowed during installation.

    My strategy is to find out, at the beginning of the installation, whether the product is already installed in per-user context, and if so to transform the registry keys manually to per-machine context (I checked: no changes other than this transform, such as file system changes, are needed). I figured out how to move all the appropriate keys in the registry from the user settings to the machine settings (pre-loading the appropriate user hive in case it doesn't appear in HKEY_USERS) and created a custom action that does it -- and it does work when I run it as a stand-alone executable before running the MSI.

    The problem, however, is that when Windows Installer runs InstallExecuteSequence, it first creates a 'cached product context' for all products. So when my custom action runs in the course of InstallExecuteSequence, this cache has already been created. Thus the FindRelatedProducts action, which determines whether an older product with the same upgrade code exists, looks at that cache and ignores the changes my custom action applied. If the previous product was in per-user context before the MSI ran, FindRelatedProducts will look at the cache and not apply the upgrade or remove the previous version, because the new product is in per-machine context -- even though, by that time, my custom action has already reconfigured the previous version to per-machine context in the registry.

    What can be done to solve this problem without violating the requirements stated above?


  • Java appending XML data

    - by Travis
    I've already read through a few of the answers on this site, but none of them worked for me. I have an XML file like this:

        <root>
            <character>
                <name>Volstvok</name>
                <charID>(omitted)</charID>
                <userID>(omitted)</userID>
                <apiKey>(omitted)</apiKey>
            </character>
        </root>

    I need to add another <character> somehow. I'm trying this, but it does not work:

        public void addCharacter(String name, int id, int userID, String apiKey) {
            Element newCharacter = doc.createElement("character");

            Element newName = doc.createElement("name");
            newName.setTextContent(name);
            Element newID = doc.createElement("charID");
            newID.setTextContent(Integer.toString(id));
            Element newUserID = doc.createElement("userID");
            newUserID.setTextContent(Integer.toString(userID));
            Element newApiKey = doc.createElement("apiKey");
            newApiKey.setTextContent(apiKey);

            // Setup and write
            newCharacter.appendChild(newName);
            newCharacter.appendChild(newID);
            newCharacter.appendChild(newUserID);
            newCharacter.appendChild(newApiKey);

            doc.getDocumentElement().appendChild(newCharacter);
        }


  • Compiling gstreamer plugin in windows

    - by utnapistim
    Hello all,

    My question: what is the correct way to compile a GStreamer plugin on Windows, so that it will be accepted by GStreamer (actually Songbird on top of GStreamer)?

    My setup: I have downloaded the Songbird sources following the steps described here, and I have a trunk/dependencies/windows-i686-msvc8 directory within my svn sources with all the GStreamer binaries. I have created an empty GStreamer plugin skeleton following the steps detailed in the GStreamer Plugin Writer's Guide, and compiled it against the GStreamer binaries in the Songbird dependencies folder. The compilation was done with VS2010 RC1 (Visual Studio 2008 yielded the same results), using an empty DLL project with the .h and .c files generated using the GStreamer Plugin Writer's Guide. The DLL was linked with libcpmt.lib, libcmt.lib, ws2_32.lib, gobject-2.0.lib, gthread-2.0.lib, gstreamer-0.10-0.lib, glib-2.0.lib, kernel32.lib and nspr4.lib, ignoring all default libraries. I have compiled the files as both .c and .cpp with the same results.

    Testing: I have installed the Songbird binaries corresponding to the correct svn version, then installed the Songbird Developer Tools addon and used it to create an addon for testing my GStreamer plugin. Songbird will not load the plugin. I have also tried to load it with gst-launch.exe from the trunk/dependencies/windows-i686-msvc8/[...] directory, and that generated runtime error R6034: "An application has made an attempt to load the C runtime library incorrectly." Most resources I found for this problem recommended restarting or reinstalling Windows :(.


  • Scapy install issues. Nothing seems to actually be installed?

    - by Chris
    I have an Apple computer running Leopard with Python 2.6. I downloaded the latest version of Scapy and ran "python setup.py install". All went according to plan. Now, when I try to run it in interactive mode by just typing "scapy", it throws a bunch of errors. What gives? Just in case, here is the FULL error message:

        INFO: Can't import python gnuplot wrapper . Won't be able to plot.
        INFO: Can't import PyX. Won't be able to use psdump() or pdfdump().
        ERROR: Unable to import pcap module: No module named pcap/No module named pcapy
        ERROR: Unable to import dnet module: No module named dnet
        Traceback (most recent call last):
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/runpy.py", line 122, in _run_module_as_main
            "__main__", fname, loader, pkg_name)
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/runpy.py", line 34, in _run_code
            exec code in run_globals
          File "/Users/owner1/Downloads/scapy-2.1.0/scapy/__init__.py", line 10, in <module>
            interact()
          File "scapy/main.py", line 245, in interact
            scapy_builtins = __import__("all",globals(),locals(),".").__dict__
          File "scapy/all.py", line 25, in <module>
            from route6 import *
          File "scapy/route6.py", line 264, in <module>
            conf.route6 = Route6()
          File "scapy/route6.py", line 26, in __init__
            self.resync()
          File "scapy/route6.py", line 39, in resync
            self.routes = read_routes6()
          File "scapy/arch/unix.py", line 147, in read_routes6
            lifaddr = in6_getifaddr()
          File "scapy/arch/unix.py", line 123, in in6_getifaddr
            i = dnet.intf()
        NameError: global name 'dnet' is not defined


  • Loader.php trying to load Doctrine classes, but we use Propel!

    - by kewpiedoll99
    We are finding cases where we get the following 500 error:

        File xyz.php does not exist or class "xyz" was not found in the file at () in SF_ROOT_DIR/lib/vendor/Zend/Loader.php line 107

    ...where xyz == Memcache (when trying to use symfony cc on the command line) or sfDoctrineAdminGenerator (when using an old-ish AdminGenerator-generated CMS page). We use Propel, but Loader.php is trying to load classes used only by Doctrine.

    Currently I am using a filthy hack where I have Loader.php check whether the file is either of these two cases, and if so simply return rather than trying to load it. Obviously, this is unacceptable longer term. Has anybody encountered this, and how did you solve it?

    Edited to add: We have:

        class ProjectConfiguration extends sfProjectConfiguration
        {
            public function setup()
            {
                // for compatibility / remove and enable only the plugins you want
                $this->enableAllPluginsExcept(array('sfDoctrinePlugin'));
            }
        }

    And we have a propel.ini file in our top-level config directory. This has only started in the past four weeks or so, and we've had a stable build for over a year now. I'm pretty sure Doctrine is totally disabled.


  • Problem with evolutionary algorithms degrading into simulated annealing: mutation too small?

    - by Schnalle
    I have a problem understanding evolutionary algorithms. I've tried using this technique several times, but I always ran into the same problem: degeneration into simulated annealing.

    Let's say my initial population, with fitness in brackets, is:

        A (7), B (9), C (14), D (19)

    After mating and mutation I have the following children:

        AB (8.3), AC (12.2), AD (14.1), BC (11), BD (14.7), CD (17)

    After elimination of the weakest, we get:

        A, AB, B, AC

    Next turn, AB will mate again with a result around 8, pushing AC out. The turn after that, AB again, pushing B out (assuming mutation changes fitness mostly within a range of about 1). So after only a few turns, the pool is populated with the originally fittest candidates (A, B) and mutations of those two (AB). This happens regardless of the size of the initial pool; it just takes a bit longer. Say, with an initial population of 50 it takes 50 turns; then all the others are eliminated, turning the whole setup into a more complicated simulated annealing. In the beginning I also mated candidates with themselves, which worsened the problem.

    So, what am I missing? Are my mutation rates simply too small, and will the problem go away if I increase them?

    Here's the project I'm using it for: http://stefan.schallerl.com/simuan-grid-grad/ -- yeah, the code is buggy and the interface sucks, but I'm too lazy to fix it right now, so be careful, it may lock up your browser. Better use Chrome, even though Firefox is not slower than Chrome for once (probably the tracing for the image comparison pays off, yay!). If anyone is interested, the code can be found here. Here I just dropped the EA idea and went for simulated annealing.

    PS: I'm not even sure about simulated annealing -- it is like an evolutionary algorithm, just with a population size of one, right?
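
    For illustration, here is a minimal, self-contained C++ sketch (a toy one-dimensional problem, not the project's image code) of the two knobs that usually prevent this collapse: a mutation step wide enough to escape the basin of the current best, and tournament selection with full generational replacement, so the all-time best individuals are not guaranteed to survive every round:

        #include <algorithm>
        #include <iostream>
        #include <random>
        #include <vector>

        int main() {
            std::mt19937 rng(42);
            std::uniform_real_distribution<double> unit(0.0, 1.0);
            std::normal_distribution<double> mutate(0.0, 2.0);  // wide sigma, not ~1

            // toy fitness: single peak at x = 3
            auto fitness = [](double x) { return -(x - 3.0) * (x - 3.0); };

            std::vector<double> pop(50);
            for (double& x : pop) x = unit(rng) * 20.0;

            for (int gen = 0; gen < 200; ++gen) {
                std::vector<double> next;
                while (next.size() < pop.size()) {
                    // tournament selection: weaker candidates still reproduce
                    // whenever they avoid meeting a stronger one
                    auto pick = [&] {
                        double a = pop[rng() % pop.size()];
                        double b = pop[rng() % pop.size()];
                        return fitness(a) > fitness(b) ? a : b;
                    };
                    double child = 0.5 * (pick() + pick());  // crossover: average
                    child += mutate(rng);                    // mutation
                    next.push_back(child);
                }
                pop.swap(next);  // generational replacement: no permanent elite
            }

            std::cout << "best x: "
                      << *std::max_element(pop.begin(), pop.end(),
                                           [&](double a, double b) {
                                               return fitness(a) < fitness(b);
                                           })
                      << '\n';
        }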


  • JQuery and IE8 - Form posting on tab change

    - by Tommy
    This issue is only happening in IE8; Firefox 3.5 seems to handle it fine. I have jQuery UI tabs set up on a page. Within each tab is a form that users can do stuff from. In the select option of the tabs setup:

        $("#tabs").tabs({select: function(event, ui) {...}});

    I have defined a submitForm function that submits the form of the tab the user was on before changing tabs. This works in all browsers. The issue is that IE does both the POST of the form on the previous tab and the GET for the content of the newly requested tab at (or really close to) the same time, from what I can tell walking through the debugger. As a result, if a tab is dependent on the form input from another tab, the data is stale: it does not match the user input from the previous tab.

    How can I either a) force the POST to complete before rendering the next tab, b) force IE to not make the POST and GET for the next tab at the same time, or c) some other option?


  • CATransaction: Layer Changes But Does Not Animate

    - by macinjosh
    I'm trying to animate part of the UI in an iPad app when the user taps a button. I have this code in my action method. It works in the sense that the UI changes how I expect, but it does not animate the changes -- it simply changes immediately. I must be missing something:

        - (IBAction)someAction:(id)sender {
            UIViewController *aViewController = <# Get an existing UIViewController #>;
            UIView *viewToAnimate = aViewController.view;
            CALayer *layerToAnimate = viewToAnimate.layer;

            [CATransaction begin];
            [CATransaction setAnimationDuration:1.0f];

            CATransform3D rotateTransform = CATransform3DMakeRotation(0.3, 0, 0, 1);
            CATransform3D scaleTransform = CATransform3DMakeScale(0.10, 0.10, 0.10);
            CATransform3D positionTransform = CATransform3DMakeTranslation(24, 423, 0);
            CATransform3D combinedTransform = CATransform3DConcat(rotateTransform, scaleTransform);
            combinedTransform = CATransform3DConcat(combinedTransform, positionTransform);

            layerToAnimate.transform = combinedTransform;

            [CATransaction commit];
            // rest of method...
        }

    I've tried simplifying the animation to just change the opacity (for example) and it still will not animate; the opacity just changes instantly. That leads me to believe something is not set up properly. Any clues would be helpful!


  • OpenSL ES decode 24bit FLAC

    - by yano
    I am trying to decode a FLAC file with a 24-bit sample format using OpenSL ES on Android. Originally, I had the SLDataFormat_PCM for the SLDataSink set up like this:

        _pcm.formatType = SL_DATAFORMAT_PCM;
        _pcm.numChannels = 2;
        _pcm.samplesPerSec = SL_SAMPLINGRATE_44_1;
        _pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
        _pcm.containerSize = SL_PCMSAMPLEFORMAT_FIXED_16;
        _pcm.channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
        _pcm.endianness = SL_BYTEORDER_LITTLEENDIAN;

    This is working well for basically any data format. Luckily, samplesPerSec is not respected (I don't want resampling).

    Now I want to support the full bit depth of a FLAC file with 24-bit samples. With the format above, it apparently performs a bit-depth conversion, because once I load the file and then check the ANDROID_KEY_PCMFORMAT_BITSPERSAMPLE info, it is 16. When I put bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_24; or SL_PCMSAMPLEFORMAT_FIXED_32, OpenSL ES rejects it:

        E/libOpenSLES(22706): pAudioSnk: bitsPerSample=32
        W/libOpenSLES(22706): Leaving Engine::CreateAudioPlayer (SL_RESULT_CONTENT_UNSUPPORTED)

    Any idea how this is meant to work? Is Android currently restricted to 16-bit int only? I would also accept 32-bit float, but I don't suppose that will work either.
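
    If the sink really is limited to 16-bit integer PCM, one common workaround is to down-convert the decoded 24-bit samples before enqueueing them. A minimal sketch, assuming packed little-endian 24-bit input (pcm24_to_pcm16 is a made-up helper, and plain truncation is used -- add dithering if quality matters):

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        std::vector<int16_t> pcm24_to_pcm16(const uint8_t* in, size_t numSamples) {
            std::vector<int16_t> out;
            out.reserve(numSamples);
            for (size_t i = 0; i < numSamples; ++i) {
                const uint8_t* s = in + 3 * i;
                // assemble the signed 24-bit sample (little-endian)...
                int32_t v = (int32_t)((uint32_t)s[0] |
                                      ((uint32_t)s[1] << 8) |
                                      ((uint32_t)s[2] << 16));
                if (v & 0x800000) v |= ~0xFFFFFF;   // sign-extend to 32 bits
                // ...and keep only the top 16 bits
                out.push_back((int16_t)(v >> 8));
            }
            return out;
        }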


  • Porting - Shared Memory x32 & x64 processes

    - by dpb
    A 32-bit host Windows application sets up shared memory (using a memory-mapped file / the CreateFileMapping() API), and other 32-bit client processes then use this shared memory to communicate with each other.

    I am planning to port the host application to a 64-bit platform, and once it is ready, I intend that both 32-bit and 64-bit client processes should be able to use the shared memory set up by the main 64-bit host application. The original code written for the x32 host uses "size_t" almost everywhere; since this grows from 4 bytes to 8 bytes as we move from x32 to x64, I am looking to replace it. I intend to replace "size_t" with "unsigned long long", so that its size will be the same on 32-bit and 64-bit. Can you suggest a better alternative? Also, will the use of "unsigned long long" have a performance impact on the x32 app? I'm guessing yes.

    Research done -- I found some very useful material:

    - "20 issues in porting from 32 bit to 64 bit" (www.viva64.com)
    - There is no way to restrict/change "size_t" on the x64 platform to 4 bytes using compiler flags or any hooks/crooks, since it is a typedef
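
    For comparison, a minimal sketch of the usual alternative: the fixed-width types from <cstdint> keep every field the same size in the 32-bit and 64-bit builds without paying for 8-byte lengths everywhere. The struct and its fields are hypothetical; note that packing has to be pinned too, since field sizes alone don't fix the layout:

        #include <cstdint>

        #pragma pack(push, 4)          // identical packing on both sides of the mapping
        struct SharedHeader {
            uint32_t version;          // exactly 4 bytes on x86 and x64
            uint32_t payloadBytes;     // uint32_t/uint64_t instead of size_t
            uint64_t sequence;         // exactly 8 bytes on both
        };
        #pragma pack(pop)

        static_assert(sizeof(SharedHeader) == 16,
                      "layout must match between the 32-bit and 64-bit builds");

    On a 32-bit build, arithmetic on uint64_t costs roughly what it costs on unsigned long long (it is typically the same underlying type), so reserving the 8-byte type for the few fields that actually need the range is also the cheaper option.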


  • SDL with OpenGL (freeglut) crashes on call to glutBitmapCharacter

    - by stett
    I have a program using OpenGL through freeglut under SDL. The SDL/OpenGL initialization is as follows:

        // Initialize SDL
        SDL_Init(SDL_INIT_VIDEO);

        // Create the SDL window
        SDL_SetVideoMode(SCREEN_W, SCREEN_H, SCREEN_DEPTH, SDL_OPENGL);

        // Initialize OpenGL
        glClearColor(BG_COLOR_R, BG_COLOR_G, BG_COLOR_B, 1.f);
        glViewport(0, 0, SCREEN_W, SCREEN_H);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0f, SCREEN_W, SCREEN_H, 0.0f, -1.0f, 1.0f);
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    I've been using glBegin() ... glEnd() blocks without any trouble to draw primitives. However, in this program, when I call any glutBitmapX function, the program simply exits without an error status. The code I'm using to draw text is:

        glColor3f(1.f, 1.f, 1.f);
        glRasterPos2f(x, y);
        glutStrokeString(GLUT_BITMAP_8_BY_13, (const unsigned char*)"test string");

    In previous similar programs I've used glutBitmapCharacter and glutStrokeString to draw text, and it seemed to work. The only difference is that I'm using freeglut with SDL now, instead of plain GLUT as in previous programs. Is there some fundamental problem with my setup that I'm not seeing, or is there a better way of drawing text?
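
    For comparison, here is a minimal sketch of two things that commonly matter in this combination: freeglut's font routines expect the library to have been initialized with glutInit() even when SDL owns the window, and GLUT_BITMAP_* fonts belong with the bitmap functions (glutBitmapCharacter, or freeglut's glutBitmapString), not with glutStrokeString. draw_label() is a made-up helper:

        #include <GL/freeglut.h>

        void draw_label(float x, float y, const char* text) {
            glColor3f(1.f, 1.f, 1.f);
            glRasterPos2f(x, y);
            // bitmap font with the bitmap call (glutBitmapString is a
            // freeglut extension; stock GLUT needs a per-character loop)
            glutBitmapString(GLUT_BITMAP_8_BY_13, (const unsigned char*)text);
        }

        int main(int argc, char* argv[]) {
            glutInit(&argc, argv);  // before any other glut* call, fonts included
            // ... SDL_Init / SDL_SetVideoMode and the rest of the setup ...
            return 0;
        }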


  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk, 10 MB of indexes according to pgAdmin.

    The problem is that inserting them, by whatever method, literally takes ages: up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts are in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Oh, and the disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3 MB/s * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case -- I was testing in psql directly; in Django, add ~50% overhead for WAL logging. I tried @commit_on_success -- same slowness; I had even implemented multi-row INSERT and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10 x 16 MB log segments?

    Table layout: id serial primary key, a bunch of int32, and 3 foreign keys to:

    - a small table, 198 rows, 16 KB on disk
    - a large table, 1.2M rows, 59 MB of data + 89 MB of index on disk
    - a large table, 2.2M rows, 198 + 210 MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.


  • ACL architecture for a Software as a Service in Spring 3.0

    - by geoaxis
    I am building a software as a service using Spring 3.0 (Spring MVC, Spring Security, Spring Roo, Hibernate), and I have to come up with a flexible access control list mechanism. I have three different kinds of users:

    - System (who can do anything to the system; includes admin and internal daemons)
    - Operations (who can add and delete users and organizations, and do maintenance work on behalf of users and organizations)
    - End users (they belong to one or more organizations; for each organization, the user can have one or more roles, like being organization admin or organization read-only member; a role like orgadmin can also add users for that organization)

    Now my question is: how should I model the User entity? If I just take the end user, it can belong to one or more organizations, so each user can contain a set of references to its organizations. But how do we model the user's role for each organization? For example, user UX belongs to organizations og1, og2 and og3; for og1 he is both orgadmin and org-read-only-user, whereas for og2 he is only orgadmin and for og3 he is only org-read-only-user.

    I have the possibility of making each user belong to one organization alone, but that makes the system bounded and I don't like that idea (although it would still satisfy the requirement).

    If you have a better, extensible ACL architecture, please suggest it. Since it's software as a service, one would expect a lot of different organizations to be part of the same system. I had one concern that it is not a good idea to keep og1 and og2 data in the same DB (if og1 decides to spawn 100 reports on the system, og2 should not suffer), but that is something advanced for now and is not directly related to ACLs but to the physical distribution of data and the setup of services based on those ACLs.

    This is a community wiki question; please correct anything you wish to. Thanks


  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: what do you use for configurable (and preferably captured) logging inside the C++ bits of your Python project? Details follow.

    Say you have a few compiled .so modules that may need to do some error checking and warn the user of (partially) incorrect data. Currently I have a pretty simplistic setup where I'm using the logging framework from Python code and the log4cxx library from C/C++. The log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed, and I'm thinking about how to make it more flexible. A couple of choices that I see:

    One way to control it would be to have a module-wide configuration call:

        # foo/__init__.py
        import sys
        from _foo import bar, baz, configure_log
        configure_log(sys.stdout, WARNING)

        # tests/test_foo.py
        def test_foo():
            # Maybe a custom context to change the logfile for
            # the module and restore it at the end.
            with CaptureLog(foo) as log:
                assert foo.bar() == 5
                assert log.read() == "124.24 - foo - INFO - Bar returning 5"

    Or have every compiled function that does logging accept optional log parameters:

        # foo.c
        int bar(PyObject* x, PyObject* logfile, PyObject* loglevel)
        {
            LoggerPtr logger = default_logger("foo");
            if (logfile != Py_None)
                logger = file_logger(logfile, loglevel);
            ...
        }

        # tests/test_foo.py
        def test_foo():
            with TemporaryFile() as logfile:
                assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5
                assert logfile.read() == "124.24 - foo - INFO - Bar returning 5"

    Or some other way? The second option seems somewhat cleaner, but it requires altering function signatures (or using kwargs and parsing them). The first one is... probably somewhat awkward, but it sets up the entire module in one go and removes the logic from each individual function.

    What are your thoughts on this? I'm all ears to alternative solutions as well. Thanks,
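
    For what it's worth, here is a minimal sketch of the module-wide variant on the C++ side, assuming log4cxx as in the current setup. configure_log() is the hypothetical helper that the Python binding would forward to, reduced to just the level argument:

        #include <log4cxx/level.h>
        #include <log4cxx/logger.h>
        #include <string>

        namespace foo {

        static log4cxx::LoggerPtr module_logger = log4cxx::Logger::getLogger("foo");

        // hypothetical helper, exposed to Python as foo.configure_log(...)
        void configure_log(const std::string& level)
        {
            if (level == "DEBUG")
                module_logger->setLevel(log4cxx::Level::getDebug());
            else if (level == "WARNING")
                module_logger->setLevel(log4cxx::Level::getWarn());
            else
                module_logger->setLevel(log4cxx::Level::getInfo());
        }

        }  // namespace foo

    Every function in the module then logs through module_logger, so no signatures change.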


  • Writing good tests for Django applications

    - by Ludwik Trammer
    I've never written any tests in my life, but I'd like to start writing tests for my Django projects. I've read some articles about tests and decided to try to write some tests for an extremely simple Django app, for a start.

    The app has two views (a list view and a detail view) and a model with four fields:

        class News(models.Model):
            title = models.CharField(max_length=250)
            content = models.TextField()
            pub_date = models.DateTimeField(default=datetime.datetime.now)
            slug = models.SlugField(unique=True)

    I would like to show you my tests.py file and ask: Does it make sense? Am I even testing for the right things? Are there best practices I'm not following that you could point me to?

    My tests.py (it contains 11 tests):

        # -*- coding: utf-8 -*-
        from django.test import TestCase
        from django.test.client import Client
        from django.core.urlresolvers import reverse
        import datetime
        from someproject.myapp.models import News

        class viewTest(TestCase):
            def setUp(self):
                self.test_title = u'Test title: bareksc'
                self.test_content = u'This is a content 156'
                self.test_slug = u'test-title-bareksc'
                self.test_pub_date = datetime.datetime.today()

                self.test_item = News.objects.create(
                    title=self.test_title,
                    content=self.test_content,
                    slug=self.test_slug,
                    pub_date=self.test_pub_date,
                )

                client = Client()
                self.response_detail = client.get(self.test_item.get_absolute_url())
                self.response_index = client.get(reverse('the-list-view'))

            def test_detail_status_code(self):
                """HTTP status code for the detail view"""
                self.failUnlessEqual(self.response_detail.status_code, 200)

            def test_list_status_code(self):
                """HTTP status code for the list view"""
                self.failUnlessEqual(self.response_index.status_code, 200)

            def test_list_numer_of_items(self):
                self.failUnlessEqual(len(self.response_index.context['object_list']), 1)

            def test_detail_title(self):
                self.failUnlessEqual(self.response_detail.context['object'].title, self.test_title)

            def test_list_title(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].title, self.test_title)

            def test_detail_content(self):
                self.failUnlessEqual(self.response_detail.context['object'].content, self.test_content)

            def test_list_content(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].content, self.test_content)

            def test_detail_slug(self):
                self.failUnlessEqual(self.response_detail.context['object'].slug, self.test_slug)

            def test_list_slug(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].slug, self.test_slug)

            def test_detail_template(self):
                self.assertContains(self.response_detail, self.test_title)
                self.assertContains(self.response_detail, self.test_content)

            def test_list_template(self):
                self.assertContains(self.response_index, self.test_title)


  • MySQL Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will be doubled. The size of the database is currently 5 GB, and most of it goes to one particular column, which is a text column for PDF files. We expect to have a 20-30 GB, or maybe 50 GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup.

    1) We are using InnoDB on every table, including the users table. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory usage / performance perspective)?

    2) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may allocate 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of having 300,000-350,000 rows, we'll have 10 million rows storing the text version of these PDF files. However, we will retrieve fewer pages: instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, which will have a big impact on performance. Currently, when we search a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory.

    Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k, but it will save memory and improve performance. Is this a good approach to the problem, and do you have any ideas how to overcome it? The text version of the data is used only for indexing and highlighting, so we are very flexible.

    Thanks,


  • Changing character encoding in MySQL, PHP scripts, HTML

    - by Sandman
    So, I have built on this system for quite some time, and it is currently outputting Latin1 (ISO-8859-1) to the web browser. These are the components:

    - MySQL: all data is stored with the Latin1 character set
    - PHP: all PHP text files are stored on disk with Latin1 encoding
    - HTML: the output has the http-equiv="content-type" content="text/html; charset=iso-8859-1" meta tag

    So, I'm trying to understand how the encodings of the different parts come into play in my workflow. If I open a PHP script, change its encoding in the text editor to UTF-8, save it back to disk and reload the web browser, the text is all messed up -- unless the text comes from the DB. If I change the encoding of the DB to UTF-8 and keep the PHP files in Latin1, I have to use utf8_decode() for the data to display correctly. And if I change the HTML meta tag, the browser will read the page incorrectly.

    So yeah, I realise that if I want to "upgrade" to UTF-8, I have to update all three parts of this setup for it to work correctly, but since it's a huge system with some 180k lines of PHP code and millions of posts in a lot of databases/tables, I don't want to start something like this without understanding everything correctly.

    What haven't I thought about? What could mess this up beyond fixing? What are the procedures for changing the encoding of an entire MySQL installation, and what's the easiest way to change the encoding of hundreds or thousands of PHP files on disk? The META tag is luckily added dynamically, so I'll change that in one place only :)

    Let me hear about your experiences with this.
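
    For the thousands-of-files-on-disk part, the byte-level conversion itself is mechanical, because Latin-1 maps 1:1 onto Unicode code points U+0000..U+00FF: bytes below 0x80 pass through, and every other byte becomes exactly two UTF-8 bytes. A minimal sketch of a single-file converter in C++ (a hypothetical helper to wire into your own file loop -- and back everything up first):

        #include <fstream>
        #include <iostream>
        #include <iterator>
        #include <string>

        std::string latin1_to_utf8(const std::string& in) {
            std::string out;
            out.reserve(in.size() * 2);
            for (unsigned char c : in) {
                if (c < 0x80) {
                    out.push_back((char)c);
                } else {
                    out.push_back((char)(0xC0 | (c >> 6)));    // leading byte
                    out.push_back((char)(0x80 | (c & 0x3F)));  // continuation byte
                }
            }
            return out;
        }

        int main(int argc, char* argv[]) {
            if (argc != 3) { std::cerr << "usage: conv <in> <out>\n"; return 1; }
            std::ifstream in(argv[1], std::ios::binary);
            std::string data((std::istreambuf_iterator<char>(in)),
                             std::istreambuf_iterator<char>());
            std::ofstream(argv[2], std::ios::binary) << latin1_to_utf8(data);
            return 0;
        }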


  • Can I copy an entire Magento application to a new server?

    - by Gapton
    I am quite new to Magento, and I have a case where a client has a running ecommerce website built with Magento, running on an Amazon AWS EC2 server. If I can locate the web root (/var/www/) and download everything from it (say via FTP), I should be able to boot up my virtual machine with LAMP installed, put every single file in there, and it should run, right?

    So I did just that, and Magento is giving me an error. I am guessing there are configs somewhere that need to be changed in order to make it work, or even a lot of paths that need to be changed, etc. Databases also need to be replicated. What are the typical things I should get right? The database is one; and the EC2 instance uses nginx while I use plain Apache, so I guess that won't work straight out of the box, and I will probably have to install nginx and a lot of other stuff.

    But basically, if I set up the environment right, it should run. Is my assumption correct? Thanks for answering!


  • Creating Emulated iSCSI Target in a Lab/Testing Environment using Windows Server 2008 R2

    - by Brian McCleary
    We have a single server running Windows Server 2008 with Hyper-V installed, running 5 virtual machines. I have purchased a second Dell R805 server so that we can create a failover cluster with our current R805, which is in production. Right now, our R805 connects via iSCSI to an MD3000i iSCSI SAN.

    Before we try to roll out the second server and clustering to our production environment, I want to be able to test and "play with" the clustering features in our lab. The problem is that I don't want to spend a couple thousand dollars on another iSCSI SAN server just for testing. I already have two servers in my lab that are installed with Windows Server 2008 R2 64-bit (one is the R805 and the other is a spare desktop that was lying around) with the Hyper-V role enabled, so they should be ready to test with, but I don't have an iSCSI target to use as the Cluster Shared Volume.

    Is there any way to install some sort of emulated iSCSI target, either on a Hyper-V image or on a spare external computer? In our lab we obviously don't need a real SAN, just something we can use to work out how to set up clustering properly outside of our production environment. Any advice is appreciated.

    FYI: I have read Jose Barreto's blog post on WUDSS at http://blogs.technet.com/josebda/archive/2008/01/07/installing-the-evaluation-version-of-wudss-2003-refresh-and-the-microsoft-iscsi-software-target-version-3-1-on-a-vm.aspx, but it seems awfully complex.


  • Django sitemap intermittent www

    - by Jen Z
    The automatic sitemap for my Django site fluctuates between including the www in URLs and leaving it out (I'm aiming to have it in all the time). This has ramifications in Google not indexing my pages properly, so I'm trying to narrow down what is causing the issue.

    I have set PREPEND_WWW = True, and my site record in the sites framework is set to include the www -- i.e. it's set to www.example.com as opposed to example.com. I'm using memcached, but pages should expire from the cache after 48 hours, so I wouldn't have thought that would be causing the issue? You can see the problem in effect at http://www.livingspaceltd.co.uk/sitemap.xml (refresh the page a few times).

    My sitemaps setup is fairly prosaic, so I'm doubtful that it is the issue, but in case it's something obvious I'm missing, here's the code:

        # urls.py
        sitemaps = {
            'subpages': Subpages_Sitemap,
            'standalone_pages': Standalone_Sitemap,
            'categories': Categories_Sitemap,
        }

        urlpatterns = patterns('',
            (r'^sitemap\.xml$', 'django.contrib.sitemaps.views.sitemap', {'sitemaps': sitemaps}),
            ...

        # sitemaps.py
        # -*- coding: utf-8 -*-
        from django_ls.livingspace.models import Page, Category, Standalone_Page, Subpage
        from django.contrib.sitemaps import Sitemap

        class Subpages_Sitemap(Sitemap):
            changefreq = "monthly"
            priority = 0.4

            def items(self):
                return Subpage.objects.filter(restricted_to__isnull=True)

        class Standalone_Sitemap(Sitemap):
            changefreq = "weekly"
            priority = 1

            def items(self):
                return Standalone_Page.objects.all()

        class Categories_Sitemap(Sitemap):
            changefreq = "weekly"
            priority = 0.7

            def items(self):
                return Category.objects.all()


  • How do I get a ComboBox SelectionChanged event to fire from a nested ListBoxItem?

    - by Stephen McCusker
    This is a rather complex problem that has me really confused right now; any help would be greatly appreciated.

    The setup:

        ListBox of Type A UserControls
          ListBoxItem of Type A UserControl
            ListBox of Type B UserControls
              ListBoxItem of Type B UserControl
                ListBox of Type C UserControls
                  ListBoxItem of Type C UserControl (contains the ComboBox)

    In other words, the Type A control has a ListBox of Type B controls, which has a ListBox of Type C controls. All of the controls are hierarchical in nature: Type A contains the data needed to load the Type B controls, and Type B contains the data needed to load the Type C controls. The Type C control has a standard ComboBox in it for changing the values of the present items.

    In addition to the above structure, I have drag and drop tied to the PreviewMouseLeftButtonDown event on both the Type A and Type B UserControl levels to handle reordering/deleting/etc. commands in the GUI. All of this is working as intended.

    The problem: when I attempt to change the value in the ComboBox, the SelectionChanged event never fires on the Type C "level" unless I'm careful enough to click on the borders/spacing in between the Type A or B controls. This happens whenever my ComboBox popup overlaps a Type A or B control located below it. The selection events for Type A or B fire instead of the Type C events, so the ComboBox never changes its value reliably. In the debugger, the code for handling the drag and drop is triggering on the next ListBoxItem located underneath the ComboBox.

    Thoughts: is there a way I can make my ComboBox popup take precedence over the items behind it while double-nested in a ListBox (i.e., ignore anything behind it while it's open)? Is there some way to reroute the incorrectly firing SelectionChanged events down to the ComboBox that's supposed to be triggering them?


  • Atomic swap in GNU C++

    - by Steve
    I want to verify that my understanding is correct. This kind of thing is tricky, so I'm almost sure I am missing something.

    I have a program consisting of a real-time thread and a non-real-time thread. I want the non-RT thread to be able to swap a pointer to memory that is used by the RT thread. From the docs, my understanding is that this can be accomplished in g++ with:

        // global
        Data *rt_data;

        Data *swap_data(Data *new_data)
        {
        #ifdef __GNUC__
            // Atomic pointer swap.
            Data *old_d = __sync_lock_test_and_set(&rt_data, new_data);
        #else
            // Non-atomic, cross your fingers.
            Data *old_d = rt_data;
            rt_data = new_data;
        #endif
            return old_d;
        }

    This is the only place in the program (other than initial setup) where rt_data is modified. When rt_data is used in the real-time context, it is copied to a local pointer. As for old_d, later on, when it is certain that the old memory is no longer in use, it will be freed in the non-RT thread.

    Is this correct? Do I need volatile anywhere? Are there other synchronization primitives I should be calling? By the way, I am doing this in C++, although I'm interested in whether the answer differs for C. Thanks ahead of time.
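
    For comparison, a minimal sketch of the same swap written against C++11 <atomic> (assuming a C++11 toolchain is available), which replaces the GCC-specific builtin and makes the intended memory ordering explicit. Note that __sync_lock_test_and_set is documented as an acquire barrier only, while the exchange below requests acquire-release ordering:

        #include <atomic>

        struct Data;  // defined elsewhere

        std::atomic<Data*> rt_data{nullptr};

        Data* swap_data(Data* new_data)
        {
            // atomically publish the new pointer and receive the old one;
            // no volatile needed -- std::atomic handles visibility itself
            return rt_data.exchange(new_data, std::memory_order_acq_rel);
        }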


  • jQuery image hover color overlay

    - by Ryan Max
    I can't seem to find any examples of this having been done anywhere on the internet before, but here is what I am going to attempt... I'm trying to go about the cleanest possible way of laying this out.

    I have an image gallery where the images are all different sizes. I want to make it so that when you mouse over an image, it turns a shade of orange -- just a simple hover effect. I want to do this without an image swap; otherwise I'd have to create an orange-colored hover image for each individual gallery image, and I'd like this to be a bit more dynamic.

    My plan is just to position an empty div over the image absolutely, with a background color, width and height 100%, and opacity: 0. Then, using jQuery, on mouseover I'd have the opacity fade to 0.3 or so, and fade back to zero on mouseout.

    My question is: what would be the best way to lay out the HTML and CSS to do this efficiently and cleanly? Here's a brief but incomplete setup:

        <li>
            <a href="#">
                <div class="hover">&nbsp;</div>
                <img src="images/galerry_image.png" />
            </a>
        </li>

        .hover {
            width: 100%;
            height: 100%;
            background: orange;
            opacity: 0;
        }

