Search Results

Search found 19923 results on 797 pages for 'instance variables'.

Page 478/797

  • How can I design my classes to include calendar events stored in a database?

    - by Gianluca78
    I'm developing a web calendar in PHP (using Symfony2), inspired by iCal, for a project of mine. At the moment I have two classes: a class "Calendar" and a class "CalendarCell". Here are the two classes' properties and method declarations:

        class Calendar {
            private $month;
            private $monthName;
            private $year;
            private $calendarCellList = array();
            private $translator;

            public function __construct($month, $year, $translator) {}
            public function getCalendarCellList() {}
            public function getMonth() {}
            public function getMonthName() {}
            public function getNextMonth() {}
            public function getNextYear() {}
            public function getPreviousMonth() {}
            public function getPreviousYear() {}
            public function getYear() {}
            private function calculateDaysPreviousMonth() {}
            private function calculateNumericDayOfTheFirstDayOfTheWeek() {}
            private function isCurrentDay(\DateTime $dateTime) {}
            private function isDifferentMonth(\DateTime $dateTime) {}
        }

        class CalendarCell {
            private $day;
            private $month;
            private $dayNameAbbreviation;
            private $numericDayOfTheWeek;
            private $isCurrentDay;
            private $isDifferentMonth;
            private $translator;

            public function __construct(array $parameters) {}
            public function getDay() {}
            public function getMonth() {}
            public function getDayNameAbbreviation() {}
            public function isCurrentDay() {}
            public function isDifferentMonth() {}
        }

    Each calendar day can include many calendar events (such as appointments or schedules) stored in a database. My question is: what is the best way to manage these calendar events in my classes? I am thinking of adding an eventList property to CalendarCell and populating it with an array of CalendarEvent objects fetched from the database. This solution doesn't allow other coders to reuse the classes without a database (because I would have to inject at least a repository service) just to create and display a calendar... so maybe it would be better to extend CalendarCell (for instance as CalendarCellEvent) and add the database features there? I feel like I'm missing some crucial design pattern! Any suggestion will be much appreciated!
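
    One way to keep the calendar reusable without a database is to have the cell depend only on a small provider interface, injected at construction time. A minimal sketch of that idea - written in Java purely for illustration, with every name (CalendarEventProvider, eventsOn) hypothetical:

        import java.time.LocalDate;
        import java.util.Collections;
        import java.util.List;

        // Hypothetical abstraction: anything that can supply events for a day.
        interface CalendarEventProvider {
            List<String> eventsOn(LocalDate day);
        }

        // The cell never sees the database; a repository-backed provider, a
        // fixture provider, or no provider at all can be plugged in.
        class Cell {
            private final LocalDate day;
            private final CalendarEventProvider provider;

            Cell(LocalDate day, CalendarEventProvider provider) {
                this.day = day;
                this.provider = provider;
            }

            List<String> eventList() {
                // Without a provider the calendar still renders, just without events.
                return provider == null ? Collections.<String>emptyList()
                                        : provider.eventsOn(day);
            }
        }

    The same shape in PHP would be an interface plus constructor injection of a repository service; coders without a database simply pass a no-op provider.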

    Read the article

  • What is the recommended MongoDB schema for this quiz-engine scenario?

    - by hughesdan
    I'm working on a quiz engine for learning a foreign language. The engine shows users four images simultaneously and then plays an audio file. The user has to match the audio to the correct image. Below is my MongoDB document structure. Each document consists of an image file reference and an array of references to audio files that match that image. To generate a quiz instance I select four documents at random, show the images and then play one audio file from the four documents at random. The next step in my application development is to decide on the best document schema for storing user guesses. There are several requirements to consider:

    - I need to be able to report statistics at a user level (for example, total correct answers, total guesses, mean accuracy, etc.)
    - I need to be able to query images based on the user's learning progress (for example, select 4 documents where guess count is 10 and accuracy is <= 0.50)
    - The schema needs to be optimized for fast quiz generation.
    - The schema must not cause future scaling issues vis-a-vis document size. Assume 1 million users who make an average of 1000 guesses each.

    Given all of this as background information, what would be the recommended schema? For example, would you store each guess in the Image document, or perhaps in a User document (not shown), or in a new collection created for logging guesses? Would you recommend logging the raw guess data, or would you pre-compute statistics by incrementing counters within the relevant document?

    Schema for Image Collection:

        _id: "505bcc7a45c978be24000005"
        date: 2012-09-21 02:10:02 UTC
        imageFileName: "BD3E134A-C7B3-4405-9004-ED573DF477FE-29879-0000395CF1091601"
        random: 0.26997075392864645
        user: "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
        audioFiles: [
          { audioFileName: "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2",
            user: "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF",
            audioLanguage: "English",
            date: 2012-09-22 01:15:04 UTC },
          { audioFileName: "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2",
            user: "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF",
            audioLanguage: "Spanish",
            date: 2012-09-22 01:17:04 UTC }
        ]
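
    If the pre-computed route is chosen, each guess becomes one atomic counter update. A sketch with the MongoDB Java driver - the collection layout, field names and literal user id below are assumptions of mine, not a recommendation from the question:

        import com.mongodb.client.MongoClient;
        import com.mongodb.client.MongoClients;
        import com.mongodb.client.MongoCollection;
        import com.mongodb.client.model.Filters;
        import com.mongodb.client.model.Updates;
        import org.bson.Document;

        public class GuessLogger {
            public static void main(String[] args) {
                try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                    MongoCollection<Document> users =
                            client.getDatabase("quiz").getCollection("users");
                    boolean correct = true; // outcome of one quiz round
                    // One atomic $inc per guess; mean accuracy is derived at read
                    // time as totalCorrect / totalGuesses.
                    users.updateOne(
                            Filters.eq("_id", "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"),
                            Updates.combine(
                                    Updates.inc("totalGuesses", 1),
                                    Updates.inc("totalCorrect", correct ? 1 : 0)));
                }
            }
        }

    The same $inc pattern can maintain per-image guessCount/correctCount fields, which keeps the "guess count is 10 and accuracy <= 0.50" query a simple indexed find.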

    Read the article

  • How to install Gitlab in a VM on a production server?

    - by Michaël Perrin
    I have a production server running Ubuntu 12.04 and I would like to install on it a VM with Gitlab (using Vagrant and VirtualBox). Let's say that the address to access Gitlab is gitlab.mydomain.com. The DNS zone has been configured to point to the IP address of the server. I want users to be able to access Gitlab (either for pushing to a repository or for accessing the web interface) from the outside. The VM has been configured to have an IP address. This means that when browsing http://gitlab.mydomain.com, for instance, the request has to be forwarded to the VM on the server, i.e. to the VM's IP address. What are the ways to configure this? Can Apache be used as a proxy? In this case, I guess it only works for HTTP requests, but not for pushing to a Git repository on the VM.
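
    Apache can indeed front the VM for the HTTP side with mod_proxy; a minimal sketch, assuming the VM answers on 192.168.50.4 (the address is hypothetical):

        <VirtualHost *:80>
            ServerName gitlab.mydomain.com
            ProxyPreserveHost On
            ProxyPass        / http://192.168.50.4/
            ProxyPassReverse / http://192.168.50.4/
        </VirtualHost>

    Pushes over HTTP(S) travel through the same proxy, since Git's smart HTTP transport is ordinary HTTP; only SSH-based pushes need something else, e.g. a VirtualBox port forward of port 22 into the VM.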

    Read the article

  • How-To: Run CMSDK against a RAC cluster

    - by frank.closheim
    Using CMSDK in a production environment often requires a robust, reliable and failover-enabled repository. When using Oracle Real Application Cluster (RAC) with your CMSDK repository, you need to have a specific configuration in place to support such a setup. This post will explain the configuration steps required when running CMSDK 9.0.4.6 with Oracle WebLogic Server (WLS).

    In the previous CMSDK 9.0.4.2 version, a RAC-enabled connect string looked like this:

        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac2)(PORT = 1521))
          (LOAD_BALANCE = NO)
          (FAILOVER = ON)
          (CONNECT_DATA =
            (SERVICE_NAME = rac)
            (failover_mode = (type=select)(method=basic))))

    CMSDK 9.0.4.6 makes use of data sources to connect to the underlying database. These data sources are configured inside your application server, such as Oracle WebLogic Server.

    In Oracle WebLogic Server 10.3.4, a single data source implementation has been introduced to support a RAC cluster. It responds to Fast Application Notification (FAN) events to provide Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and RAC instance graceful shutdown. XA affinity is supported at the global transaction Id level. The new feature is called WebLogic Active GridLink for RAC, which is implemented as the GridLink data source within WebLogic Server.

    This GridLink data source also works with Oracle Single Client Access Name (SCAN). SCAN is a feature used in RAC environments that provides a single name for clients to access any Oracle Database running in a cluster. You can think of SCAN as a cluster alias for databases in the cluster. The benefit is that the client's connect information does not need to change if you add or remove nodes or databases in the cluster.

    The CMSDK 9.0.4.6 documentation describes how to create a regular JDBC data source named jdbc/OracleDS. Please refer to the following document, which describes in detail how to create a GridLink data source in WLS.
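
    For reference, a GridLink data source pointing at a SCAN address typically uses a thin-driver URL of this shape (host and service name below are placeholders):

        jdbc:oracle:thin:@//rac-scan.example.com:1521/rac

    Because the SCAN name abstracts the cluster membership, this URL stays valid as nodes are added to or removed from the cluster.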

    Read the article

  • Are there design patterns or generalised approaches for particle simulations?

    - by romeovs
    I'm working on a project (for college) in C++. The goal is to write a program that can more or less simulate a beam of particles flying through the LHC synchrotron. Not wanting to rush into things, my team and I are thinking about how to implement this, and I was wondering if there are general design patterns that are used to solve this kind of problem. The general approach we came up with so far is the following:

    - there is a World that holds all objects
    - you can add objects to this world, such as Particle, Dipole and Quadrupole
    - time is cut up into discrete steps, and at each point in time, for each Particle, the magnetic and electric forces that each object in the World generates are calculated and summed up (luckily electromagnetism is linear)
    - each Particle moves accordingly (using a simple estimation approach to solve the differential equations of motion)
    - save the Particle positions
    - repeat

    This seems a good approach but, for instance, it is hard to take into account symmetries that might be present (such as the magnetic field of each Quadrupole), and it is thus suboptimal. To take into account such symmetries as that of the Quadrupole field, it would be much easier to (also) make space discrete and store some form of the Quadrupole field somewhere. (Since 2532 or so Quadrupoles are stored, this should lead to a massive performance gain, as each Quadrupole field would not have to be recalculated.) So, are there any design patterns? Is the World approach feasible, or is it old-fashioned, bad programming? What about symmetry - how is that generally taken into account?
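
    A minimal sketch of the World/time-step idea, with every name hypothetical - forces are summed by superposition, then a simple explicit Euler step is taken:

        import java.util.ArrayList;
        import java.util.List;

        class Particle {
            double[] pos = new double[3], vel = new double[3];
            double mass = 1.0;
        }

        // Anything that exerts a force on a particle: Dipole, Quadrupole, ...
        interface FieldSource {
            double[] forceOn(Particle p); // returns {fx, fy, fz}
        }

        class World {
            final List<FieldSource> sources = new ArrayList<FieldSource>();
            final List<Particle> particles = new ArrayList<Particle>();

            void step(double dt) {
                for (Particle p : particles) {
                    double[] f = new double[3];
                    for (FieldSource s : sources) {   // linearity: fields simply add
                        double[] fs = s.forceOn(p);
                        for (int i = 0; i < 3; i++) f[i] += fs[i];
                    }
                    for (int i = 0; i < 3; i++) {     // explicit Euler update
                        p.vel[i] += dt * f[i] / p.mass;
                        p.pos[i] += dt * p.vel[i];
                    }
                }
            }
        }

    One design note: a symplectic integrator such as leapfrog usually keeps the beam's energy far more stable over many steps than plain Euler, at essentially the same cost per step.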

    Read the article

  • Get the interface and IP address used to connect to a specific host (IP)

    - by umop
    I'm sure this has been asked and answered before, but I wasn't able to find it, so hopefully this will at least link someone to the right place. I want to find out the local interface and IP address used to reach a certain host. For instance, if I had 3 adapters connected to my box and all three went to different networks, I'd like to know which of the three (specifically, its IP address) is used to reach my.local.intranet (in this case, it would be a VPN tunnel interface). I suspect this is a job for ifconfig or traceroute, but I haven't been able to find the correct switches. I'm running OS X 10.7 (Darwin). Thanks!
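
    On OS X, the route utility (rather than traceroute) reports exactly this. A quick sketch - the interface name and addresses shown are illustrative, not guaranteed output:

        $ route get my.local.intranet
           route to: my.local.intranet
          interface: utun0
        $ ifconfig utun0
        utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1500
                inet 10.8.0.6 --> 10.8.0.5 netmask 0xffffffff

    route get prints the interface the routing table chooses for that destination, and ifconfig on that interface gives the local IP address.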

    Read the article

  • Who controls Internet speed and routers?

    - by Ozgun Sunal
    I need to learn two things; each is somewhat related to the other. The first one is: while our LAN speed is usually 100 Mbps or at gigabit levels (very big compared to WAN speeds), WAN speeds - for instance, DSL connections - are far less than this. However, we are able to download huge files at those Mb speeds. Isn't this weird? [My real concern is why WAN speeds are lower than LAN speeds.] Second: who controls the routers around the large Internet? (While we, as web clients, are connected to the Internet, packets travel through those routers to the destination network(s).) Are those routers all inside the ISP network, and if not, who controls those large numbers of routers?

    Read the article

  • Transparent JPanel, Canvas background in JFrame

    - by Andy Tyurin
    I want to make a canvas background and add some elements on top of it. To this end I made a JPanel transparent with setOpaque(false) and added it as the first element of the JFrame container; then I added a Canvas with a black background (in the future it will host an animation) to the JFrame as the second element. But I can't understand why I see a grey background, not a black one. Any suggestions?

        import java.awt.Canvas;
        import java.awt.Color;
        import java.awt.Container;
        import java.awt.Dimension;
        import javax.swing.JFrame;
        import javax.swing.JPanel;

        public class Game extends JFrame {
            public Container container;       // Game container with components
            public Canvas backgroundLayer;    // Background layer of the game
            public JPanel elementsLayer;      // elements panel (on top of backgroundLayer), holds different elements
            private Dimension startGameDimension = new Dimension(800, 600); // start game dimension

            public Game() {
                // init main window
                super("Astra LaserForces");
                setSize(startGameDimension);
                setBackground(Color.CYAN);
                container = getContentPane();
                container.setLayout(null);
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

                // init JPanel elements layer
                elementsLayer = new JPanel();
                elementsLayer.setSize(startGameDimension);
                elementsLayer.setBackground(Color.BLUE);
                elementsLayer.setOpaque(false);
                container.add(elementsLayer);

                // init canvas background layer
                backgroundLayer = new Canvas();
                backgroundLayer.setSize(startGameDimension);
                backgroundLayer.setBackground(Color.BLACK); // set default black color
                container.add(backgroundLayer);
            }

            // start game
            public void start() {
                setVisible(true);
            }

            // create new instance of game and start it
            public static void main(String[] args) {
                new Game().start();
            }
        }
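
    One likely culprit is mixing the heavyweight AWT Canvas with lightweight Swing components: heavyweight components do not honour Swing's z-order and transparency. A sketch of an all-Swing alternative, where the background paints itself in paintComponent (the class name is hypothetical):

        import java.awt.Color;
        import java.awt.Graphics;
        import javax.swing.JPanel;

        // Lightweight replacement for the Canvas: paints its own black background;
        // future animation frames can be drawn in the same method.
        class BackgroundPanel extends JPanel {
            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                g.setColor(Color.BLACK);
                g.fillRect(0, 0, getWidth(), getHeight());
            }
        }

    With both layers lightweight, the transparent elementsLayer can sit above the background via a JLayeredPane (or simply as a child of the background panel), and the black actually shows through.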

    Read the article

  • Installing a fake microphone on Windows Server.

    - by Adrian
    My application, which runs on Windows Server (an instance on Amazon EC2), requires Skype to be able to make phone calls. The server, of course, does not have a microphone installed, and I don't need it to have one, because my application changes the input source to a wav file when the call is established. However, Skype has a strict rule that a microphone must be installed for a call to be made. Thus I want to install a fake microphone that will trick Skype's configuration. So far, I was able to start and run the Windows Sound service, which enabled all of the sound settings. Any ideas are very welcome!

    Read the article

  • Firefox cannot recognize certificates for well-known sites

    - by RCola
    When trying to connect to well-known sites, for instance hotmail.com, Firefox shows that This Connection is Untrusted. In Options > Advanced > Certificates it's configured to select one matching certificate automatically. Why does Firefox not trust the current connection? Can it be a man-in-the-middle attack, or is it something like a broken certificate store on my computer? UPDATE 2 - Solved: the problem is the antivirus's Web Access protection; it interferes with the HTTPS connection. Similar to a man-in-the-middle? Why can't ESET do it correctly?

    Read the article

  • Issue with Windows Server backup

    - by mamu
    I have Windows Server 2008 R2 installed; the only service running on it is Hyper-V. I am trying to take a backup using the Windows Server Backup feature, and it fails with the following error in the event log:

        The backup operation that started at '2009-08-22T18:42:14.123000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '2155348129'. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved.

    The error itself points to other event logs for more detail, but I can't find anything in the event logs. I then ran the following command:

        vssadmin list writers

    Its output had the following out-of-the-ordinary entry:

        Writer name: 'Microsoft Hyper-V VSS Writer'
           Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
           Writer Instance Id: {d15c5f78-121c-464f-b23b-f285e919b05c}
           State: [8] Failed
           Last error: Inconsistent shadow copy

    How can I resolve this?

    Read the article

  • Looking for a better Factory pattern (Java)

    - by Sam Goldberg
    After doing a rough sketch of a high-level object model, I am doing iterative TDD and letting the other objects emerge as a refactoring of the code (as it increases in complexity). (That whole approach may be a discussion/argument for another day.) In any case, I am at the point where I am looking to refactor code currently in if-else blocks into separate objects. This is because there is another value combination which creates a new set of logical sub-branches. To be more specific, this is a trading system feature, where buy orders have different behavior than sell orders. Responses to the orders have a numeric indicator field which describes some event that occurred (e.g. fill, cancel). The combination of this numeric indicator field plus whether it is a buy or sell requires different processing by the code. Creating a family of objects to separate the code for the unique handling of each combination of the 2 fields seems like a good choice at this point. The way I would normally do this is to create some Factory object which, when called with the 2 relevant parameters (indicator, buysell), would return the correct subclass of the object. Sometimes I implement this pattern with a map, which allows looking up a live instance (or a constructor to use via reflection), and sometimes I just hard-code the cases in the Factory class. So - for some reason this feels like poor design (e.g. one object which knows all the subclasses of an interface or parent object), and a bit clumsy. Is there a better pattern for solving this kind of problem? And if this factory method approach makes sense, can anyone suggest a nicer design?
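
    One common way to soften the "factory knows every subclass" smell is a registry keyed by the two fields, where handlers are registered at wiring time instead of being hard-coded in the factory. A rough sketch, with all names hypothetical:

        import java.util.HashMap;
        import java.util.Map;

        interface OrderResponseHandler {
            void handle(String orderId);
        }

        class HandlerRegistry {
            // Key combines the numeric indicator and the buy/sell flag, e.g. "1:BUY"
            private final Map<String, OrderResponseHandler> handlers =
                    new HashMap<String, OrderResponseHandler>();

            void register(int indicator, boolean buy, OrderResponseHandler h) {
                handlers.put(indicator + ":" + (buy ? "BUY" : "SELL"), h);
            }

            OrderResponseHandler lookup(int indicator, boolean buy) {
                OrderResponseHandler h =
                        handlers.get(indicator + ":" + (buy ? "BUY" : "SELL"));
                if (h == null)
                    throw new IllegalArgumentException("no handler for " + indicator);
                return h;
            }
        }

    The registry itself knows no concrete classes; adding a new indicator/side combination touches only the new handler class and one register(...) call in the wiring code.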

    Read the article

  • MMS gets hostname from uname and can't connect to it

    - by Adam Monsen
    I'm trying to get 10gen's MongoDB Monitoring Service to monitor my 3-node replica set. The replica set runs in an AWS VPC; each node runs on a different [virtual] machine. Assume their IPs are 192.168.1.1 (primary or secondary), 192.168.1.2 (primary or secondary), and 192.168.1.3 (arbiter). From a quick look at the source, MMS appears to get the hostname of the machine it is running on like so: platform.uname()[1]. For my VPC EC2 instance, this returns something like ip-192-168-1-1. MMS then tries to connect to this hostname, which does not resolve. I'd rather just use IP addresses (since they're always static), but it seems like the hardcoded use of platform.uname()[1] in mmsAgent.py precludes that. So, what's an elegant way out of this? Hack /etc/hosts? I'm not setting up a DNS server just for this. Maybe I'm just misunderstanding how to configure MMS.
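
    If the /etc/hosts hack is acceptable, one line per node on the machine running the MMS agent makes the uname-derived names resolve (addresses taken from the question):

        192.168.1.1  ip-192-168-1-1
        192.168.1.2  ip-192-168-1-2
        192.168.1.3  ip-192-168-1-3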

    Read the article

  • Distribute Nagios to reduce false alarms

    - by GDR
    I'm currently running a single Nagios instance. From time to time, I'm getting false alarms about timeouts - for example, it says that HTTP is down on some server, but when I open it in my browser several seconds later, it loads fast, and in general there is no trace of an error. What can I do to reduce such false alarms? I'm guessing that they are caused by transient network issues on my monitoring server. I suspect that setting up another monitoring server on a different network would help greatly, but how do I plug it into Nagios? Is it at all possible with Nagios, or do I have to switch to another monitoring system? I like my configs and, if possible, I'd like to stay with Nagios or something compatible (Icinga?).
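
    Independent of adding a second monitoring server, transient blips can often be absorbed by letting Nagios re-check before it alerts; a sketch of the relevant service directives (the values are illustrative):

        define service {
            host_name             web01
            service_description   HTTP
            check_command         check_http
            max_check_attempts    4    ; stay in a SOFT state for 3 re-checks
            retry_interval        1    ; minutes between re-checks
        }

    Only when the check still fails after max_check_attempts tries does the problem become a HARD state and generate a notification.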

    Read the article

  • Taking over and Moving a PHP site

    - by KCavon
    I have an internal-use PHP site at my new position. It only runs a few days a year, off site, so we keep it on laptops. The hardware it has been on, an 8-year-old IBM Thinkpad running Fedora, is dying. I have new Lenovo Thinkpads running the latest and greatest Ubuntu. I have copied the contents of /var to a shared drive, renamed the old www folder in /var on the new machine, and copied over the old www folder. I can get to the login page and into the site, but when I look something up it returns Cannot Open. I know I cannot get to MySQL on the new machine because the users and passwords don't match. The version of PHP from the old machine predates the inclusion of the setup script. I know very little about PHP. I am looking for input on the proper way to link the old PHP files to my MySQL instance. Any help, much appreciated.
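
    Since the site's PHP code connects with credentials that only exist in the old MySQL instance, the usual first step is to recreate that user and database on the new machine and import a dump; a sketch, where the user, password and database names are placeholders to be replaced with whatever the old PHP config expects:

        CREATE DATABASE olddb;
        CREATE USER 'olduser'@'localhost' IDENTIFIED BY 'oldpassword';
        GRANT ALL PRIVILEGES ON olddb.* TO 'olduser'@'localhost';
        FLUSH PRIVILEGES;

    The credentials the PHP expects are normally findable in a config include (often something like config.php or db.inc) under the old www folder.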

    Read the article

  • Thin web server - single or multiple instances per IP address:port?

    - by wchrisjohnson
    I'm deploying a Rack/Sinatra/WebSocket app onto several servers and will use Thin as the web server (http://code.macournoyer.com/thin/). There are almost no views to show, so I am not front-ending it with a traditional web server like Apache or nginx. In general, you see Thin started with an underlying config file that has the number of server instances to start, say 3, and the port to start with, say 5000. So, in my example, when Thin starts, it starts up three instances on a range of ports, starting on port 5000. If I have a series of virtual machines, say 3, 6, 9, etc., that I treat as a cluster, would/should I choose to start a single Thin instance on each VM, or multiple instances on each VM? Why? Thanks - Chris
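
    For reference, a per-VM Thin configuration along the lines described (three instances starting at port 5000); the file contents are a sketch and the paths are placeholders:

        # config.yml
        address: 127.0.0.1
        port: 5000
        servers: 3
        environment: production
        rackup: /srv/myapp/config.ru
        pid: /var/run/thin/myapp.pid
        log: /var/log/thin/myapp.log
        daemonize: true

    Started with thin -C config.yml start, this yields instances on ports 5000-5002 on that VM.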

    Read the article

  • Startup Cassandra layout

    - by davidkomer
    We've got a relatively low-traffic site (~1K pageviews/day) hosted on a single server, and expect it to grow significantly over the next few years. I'm thinking of moving over to Rackspace CloudServer or EC2 and firing up 3 nodes (all on CentOS):

    - 2 x Web (Apache), behind a load balancer
    - 1 x MySQL (for the WordPress-powered part)

    The question is where to put Cassandra right now... Should it sit on each Web node, or on the MySQL node? My thought right now is to put it on the Web nodes. It's my understanding that Cassandra has the benefit of fault tolerance (i.e. if we take a node down, the site is still operational). So even with only 2 nodes, we'd have that benefit, as opposed to just putting it on the MySQL node. Also, as we scale up and add another node, a Cassandra instance can come along with it, and the PHP can always run its queries on localhost. Is this a good idea?

    Read the article

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing (setting: old default -> new default, with notes):

        back_log: 50 -> 50 + (max_connections / 5), capped at 900
        binlog_checksum: off -> CRC32 (new variable in 5.6)
        binlog_row_event_max_size: 1k -> 8k
        flush_time: 1800 -> 0 on Windows (was already 0 on other platforms)
        host_cache_size: 128 -> 128 + 1 for each of the first 500 max_connections + 1 for every 20 max_connections over 500, capped at 2000 (new variable in 5.6)
        innodb_autoextend_increment: 8 -> 64 (now affects *.ibd files; 64 means 64 megabytes)
        innodb_buffer_pool_instances: 0 -> 8; on 32-bit Windows only, if innodb_buffer_pool_size is greater than 1300M, the default is innodb_buffer_pool_size / 128M
        innodb_concurrency_tickets: 500 -> 5000
        innodb_file_per_table: off -> on
        innodb_log_file_size: 5M -> 48M (InnoDB will always change size to match the my.cnf value; also see innodb_log_compressed_pages and binlog_row_image)
        innodb_old_blocks_time: 0 -> 1000 (1 second)
        innodb_open_files: 300 -> 300, or if innodb_file_per_table is ON, the higher of table_open_cache and 300
        innodb_purge_batch_size: 20 -> 300
        innodb_purge_threads: 0 -> 1
        innodb_stats_on_metadata: on -> off
        join_buffer_size: 128k -> 256k
        max_allowed_packet: 1M -> 4M
        max_connect_errors: 10 -> 100
        open_files_limit: 0 -> 5000 (see note 1)
        query_cache_size: 0 -> 1M
        query_cache_type: on/1 -> off/0
        sort_buffer_size: 2M -> 256k
        sql_mode: none -> NO_ENGINE_SUBSTITUTION (see later post about default my.cnf for STRICT_TRANS_TABLES)
        sync_master_info: 0 -> 10000 (recommend: master_info_repository=table)
        sync_relay_log: 0 -> 10000
        sync_relay_log_info: 0 -> 10000 (recommend: relay_log_info_repository=table; also see Replication Relay and Status Logs)
        table_definition_cache: 400 -> 400 + table_open_cache / 2, capped at 2000
        table_open_cache: 400 -> 2000 (also see table_open_cache_instances)
        thread_cache_size: 0 -> 8 + max_connections / 100, capped at 100

    Note 1: In 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. It now uses the higher of that and (5000 or what you specify).

    We are also adding a new default my.cnf file and guided instructions on the key settings to adjust. More on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete. Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release, they are subject to change. As part of this work I reviewed every old server setting, plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team, and the many excellent blog entries and comments from others over the years, including from many MySQL gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan. With these changes we're trying to make it easier to set up the server by adjusting only a few settings that will cause others to be set. This happens only at server startup and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The gurus don't need this, but for many newcomers the defaults will be very useful. Possibly the most unusual change is the way we vary the setting for innodb_buffer_pool_instances for 32-bit Windows.
This is because we've found that DLLs with specified load addresses often fragment the limited four gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool. The smaller requests for many pools are more likely to succeed. If you change the value of innodb_log_file_size in my.cnf you will see a message like this in the error log file at the next restart, instead of the old error message: [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153 One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point of sale terminals and routers though shared hosting or end user systems and on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were for the smaller systems and these change that to larger shared hosting or shared end user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks. We're very interested in your feedback, comments and suggestions.
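
    One point worth repeating from above: the auto-sized defaults apply only to variables you have not set, so an explicit my.cnf entry always wins. A minimal sketch (values illustrative only):

        # my.cnf - anything set here overrides the 5.6 auto-sizing
        [mysqld]
        max_connections      = 500   # back_log, host_cache_size etc. then scale from this
        innodb_log_file_size = 128M  # InnoDB resizes the redo logs to match at restart
        sql_mode             = NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES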

    Read the article

  • Forcing exact hostname match in IIS

    - by iis_newbie
    I am looking for a way to force an exact hostname match in IIS when using HTTPS. For instance, I want https://works.mysite.com/resource to be OK, but https://noworks.mysite.com/resource to return 404 (assuming they both resolve to the same IP). IIUC, the default behavior of IIS when going to https://noworks.mysite.com/resource is to show a cert warning; if the user presses continue, the user is able to access the URL. I was able to do this by generating a *.mysite.com SSL cert and then specifying the hostname within the bindings in IIS, but without the * at the beginning, the hostname field is disabled and blank. Am I missing something simple here?

    Read the article

  • Is my gedit-latex-plugin working properly?

    - by arroy_0209
    I have installed gedit-latex-plugin (0.2 rc3) for use with gedit (2.30.3) on Ubuntu 10.04. If I run the command gedit file.tex & in a terminal, the file opens and everything seems to work fine, but lots of debug messages appear in the terminal, some of which are:

        2012-03-31 22:14:27,263 DEBUG resources - Initializing resource locating
        2012-03-31 22:14:27,361 DEBUG Preferences - not found
        2012-03-31 22:14:27,373 DEBUG JobManager - Created JobManager instance 147209196
        2012-03-31 22:14:27,379 DEBUG GeditLaTeXPlugin - activate
        2012-03-31 22:14:27,379 DEBUG WindowContext - init
        2012-03-31 22:14:27,444 DEBUG GeditWindowDecorator - _init_tab_decorators: initialized 0 decorators
        2012-03-31 22:14:27,511 DEBUG GeditWindowDecorator - active_tab_changed
        2012-03-31 22:14:27,511 DEBUG GeditWindowDecorator - ---------- ADJUST: None
        2012-03-31 22:14:27,513 DEBUG GeditWindowDecorator - No window-scope views for this extension
        2012-03-31 22:14:27,513 DEBUG GeditWindowDecorator - _set_selected_bottom_view: 0
        2012-03-31 22:14:27,514 DEBUG GeditWindowDecorator - _set_selected_side_view: 0
        2012-03-31 22:14:27,539 DEBUG GeditWindowDecorator - tab_added
        2012-03-31 22:14:27,952 DEBUG GeditTabDecorator - loaded
        2012-03-31 22:14:27,964 DEBUG GeditTabDecorator - _adjust_editor: URI has changed
        2012-03-31 22:14:27,965 DEBUG LaTeXCompletionHandler - init
        2012-03-31 22:14:27,966 DEBUG LanguageModelFactory - Pickled object found: /home/abcd/.gnome2/gedit/plugins/GeditLaTeXPlugin/latex.pkl
        2012-03-31 22:14:28,075 DEBUG CompletionDistributor - init
        2012-03-31 22:14:28,078 DEBUG WindowContext - Created view LaTeXOutlineView
        2012-03-31 22:14:28,078 DEBUG WindowContext - Created view IssueView
        2012-03-31 22:14:28,079 DEBUG LaTeXEditor - init(file:///home/abcd/dir1/file1.tex)
        2012-03-31 22:14:28,079 DEBUG LaTeXEditor - Parsing document...
        2012-03-31 22:14:28,080 DEBUG IssueView - init
        2012-03-31 22:14:28,082 DEBUG IssueView - init finished
        2012-03-31 22:14:28,092 INFO LaTeXEditor - LaTeXParser.parse: 0.010000
        2012-03-31 22:14:28,092 DEBUG LaTeXEditor - Parsed 1599 bytes of content
        2012-03-31 22:14:28,093 DEBUG LaTeXOutlineView - set_outline
        2012-03-31 22:14:28,093 DEBUG LaTeXOutlineView - init
        2012-03-31 22:14:28,097 DEBUG LaTeXValidator - validate
        2012-03-31 22:14:28,098 DEBUG LanguageModel - set_newcommands:
        2012-03-31 22:14:28,102 DEBUG LaTeXEditor - Parsing finished
        2012-03-31 22:14:28,105 DEBUG GeditWindowDecorator - ---------- ADJUST: .tex
        2012-03-31 22:14:28,119 DEBUG GeditWindowDecorator - _set_selected_bottom_view: 0
        2012-03-31 22:14:28,120 DEBUG GeditWindowDecorator - _set_selected_side_view: 0

    I am not sure whether the gedit-latex-plugin is working properly or facing some problem. Why are there so many debug messages? Can anybody suggest what I should do?

    Read the article

  • Is Page-Loading Time Relevant?

    - by doug
    Take this (ServerFault) page, for instance. It has about 20 elements. When the last of these has loaded, the page is deemed "loaded" - but not before. This is certainly the protocol used by our testing service (which is among the small group of well-known vendors that offer that sort of service). Obviously this method is based on a clear, definite endpoint - it is therefore easy to apply, with concomitant reliability. I think it's also the metric used by the popular Firefox plugin 'YSlow'. For my employer's website, the last-to-load items are nearly always tracking code, tracking pixels, etc., so from the user's point of view - their perception - the page was "loaded" well before it had actually loaded by the criterion used by our testing service (15-20% earlier is a rough estimate). I'm sure I'm not the first person to consider this, nor the first to wonder whether it encourages micro-optimization while ignoring overall system-level, or user-perceived, performance. So my question is: are there other more practical (yet still reasonably precise) measures of page loading time?

    Read the article

  • How to virtualize an OEM Windows install

    - by jumentous
    I've bought a new computer and, as always, it comes with Windows 7 pre-installed. I'm a Linux user by default, but I still keep a virtual Windows installation around. Is it possible to install my Linux distribution and use the OEM license that came with the computer to create the virtual instance? I have no intention of moving the license off the physical machine, so I'm sure I could argue that I'm not violating the license, but I don't expect that this would work and activate without great legal battles. In the event that this doesn't work, what other options do I have? Can I shrink the physical partition and have QEMU boot it? My thought is that Windows would detect the change in hardware and fail. What can I do with this Windows install as a Linux user?

    Read the article

  • Working with the ADF DVT Map Component

    - by Shay Shmeltzer
    The map component provided by the ADF Faces DVT set of components is one that we always end up using in key demos - simply because it is so nice looking, but also because it is quite simple to use. So in case you need to show some geographical data, or if you just want to impress your manager, here is a little video that shows you how to create two types of maps. The first one is a color-themed map, where you show different states with different colors based on the value of some data point there. The other is a point theme, basically showing specific locations on the map. For both cases I'm using the Oracle-provided MapViewer instance at http://elocation.oracle.com/mapviewer. You can find more information about using the map component in the Web User Interface Developer's Guide and in the tag doc and components demo. For the first map, the query I'm using (on the HR demo schema in the Oracle DB) is:

        SELECT COUNT(EMPLOYEES.EMPLOYEE_ID),
               Department_name,
               STATE_PROVINCE
        FROM   EMPLOYEES,
               DEPARTMENTS,
               LOCATIONS
        WHERE  employees.department_id = departments.department_id
        AND    departments.location_id = locations.location_id
        GROUP BY Department_name, LOCATIONS.STATE_PROVINCE

    Read the article

  • Graphics setup tune-up checklist

    - by Click Ok
    I was trying to play the game Warzone 2100; the game runs at a nice speed, but the screen keeps flickering with horizontal lines... My PC has an integrated GeForce Go 6100 VGA. OK, not a powerful VGA, but it shouldn't be the end of the world to run a "simple" game like this (compared with other games that would have you sell your eyes to buy an expensive VGA). So I think the problem may be in the configuration of my machine. I use it primarily for programming jobs, so I have paid little attention to the video setup. I would like a checklist to know whether my PC is "ready" for games. For example, I know that I need:

    - the latest VGA drivers
    - updated DirectX and OpenGL

    What do you suggest? Are there also good programs to test performance and suggest improvements to the system? Thank you! PS: I'm using Windows 7

    Read the article

  • Interaction between two Clouds

    - by user7969
    I have set up Cloud-A with 1 [CLC+CC] and 2 [NC] computers. I have another Cloud-B with the same configuration. [I am using the Ubuntu Enterprise Cloud.] Both of them work fine individually, on the same LAN. Now, if I want to add the NC of Cloud-A to the CC of Cloud-B (in case the resources of Cloud-B are exhausted), how can I make that possible? I guess this calls for the interoperability stuff... Could you please explain what happens exactly when we ask for an instance - does the direct interaction happen between the client and the NC, or does it go through the CLC and CC? What I want to say is: say there are multiple cloud providers, and a user is subscribed to one of them, say Cloud-A, for IaaS. As the requirements are dynamic, all the resources of Cloud-A may get exhausted. There may be another Cloud-B which can provide the services, but Cloud-A can't simply ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, keeping the client fully unaware of what's going on in the background? Please reply. I am sorry if I'm making a mistake anywhere... Thanks in advance :)

    Read the article
