Search Results

Search found 399 results on 16 pages for 'jean philippe murray'.

Page 6/16 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Centering background image, using css

    - by Jean
    Hello, I want to center a background image. There is no div used; this is the CSS: body{ background-position:center; background-image:url(../images/images2.jpg) no-repeat; } The CSS above tiles the image all over and does center it, but half the image is not seen; it just kind of moves up. What I want is to center the image. Could I also adapt the image so it displays properly even on a 21-inch screen? Appreciate the help. Thanks, Jean
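
    A likely fix, sketched below: no-repeat is not a valid value inside background-image, which can cause the declaration to be ignored or the image to tile. The background shorthand (or a separate background-repeat) takes it instead.

      /* minimal sketch: shorthand combines image, repeat and position */
      body {
        background: url(../images/images2.jpg) no-repeat center center;
      }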

    Read the article

  • footer bar like facebook - css

    - by Jean
    Hello, I want to create a footer like Facebook's that sticks to the bottom of the page regardless of scrolling. <div id="footer"></div> and here is the CSS: #footer{ position:absolute; left:0px; margin-bottom:0px; vertical-align:bottom; bottom:0px; width:100%; height:25px; background-color:#dfd5d7; overflow:hidden; } It looks fine at first, but when I resize the browser window it does not stick to the bottom; it sits about 50 pixels up. Any solutions? Thanks, Jean
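
    One common fix, sketched under the assumption that the footer should pin to the viewport: position:fixed (rather than absolute) keeps it at the bottom regardless of scrolling or window size (note that IE6 does not support position:fixed).

      /* sketch: fixed positions against the viewport, not the page */
      #footer {
        position: fixed;
        left: 0;
        bottom: 0;
        width: 100%;
        height: 25px;
        background-color: #dfd5d7;
        overflow: hidden;
      }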

    Read the article

  • nested divs should have 2 diff css styles

    - by Jean
    Hello, I have 2 nested divs, both styled: #x{ width:400; height:400px; background-color:#fff; color:#000 } #y{ width:200; height:200px; background-color:#000; color:#ccc; } <div id="y"><div id="x">Here lies a x value</div></div> I want #x and #y to have individual CSS properties, but that is not the case; #x overrides the #y values. Any help appreciated. Thanks, Jean
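
    Two things worth checking, sketched below: the width values are missing their px units, and the inner #x (400px) is larger than its container #y (200px), so it simply paints over it. Making the inner div smaller than the outer one lets both styles show.

      /* sketch: container larger than its nested child */
      #y { width: 400px; height: 400px; background-color: #000; color: #ccc; }
      #x { width: 200px; height: 200px; background-color: #fff; color: #000; }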

    Read the article

  • When Web page is displayed in a div, Error: Unable to modify the parent container element before the child element is closed

    - by Jean
    Hello, a URL is being displayed in a div as shown below <div id="ipage"> <? $url = "http://www.yahoo.com"; $file1 = fopen($url, "r"); $content = file_get_contents($url); echo $content; ?> </div> And I receive this error: Webpage error details Message: HTML Parsing Error: Unable to modify the parent container element before the child element is closed (KB927917) Any bright ideas to solve this? Thanks, Jean
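
    A hedged workaround: echoing a complete HTML document inside a <div> nests a second <html> element, which Internet Explorer may reject with exactly this KB927917 parsing error. An iframe gives the external page its own document context (the unused fopen() call can be dropped as well):

      <div id="ipage">
        <!-- sketch: the remote page renders in its own document -->
        <iframe src="http://www.yahoo.com" width="100%" height="600"></iframe>
      </div>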

    Read the article

  • Get ids from one div to another div

    - by Jean
    Hello, here are the two divs <div id="1"> <span id="a1">first</span> <span id="a2b">first</span> <span id="a3">first</span> </div> Click <div id="2"></div> And the jQuery code: $('#2').bind('click',function(){ var xx = $('#a'+i).attr('id'); $('#2').append(xx); i=i+1; }); It does not get all three ids from #1. Thanks, Jean
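
    A sketch of one way to collect all three ids, assuming jQuery is loaded: iterating over the spans inside #1 avoids both the undefined counter i and the irregular id names (a1, a2b, a3). Note that ids beginning with a digit, such as "1", are technically invalid in HTML4, although jQuery's id lookup usually tolerates them.

      // sketch: copy every span id from #1 into #2 on click
      $('#2').bind('click', function () {
        $('#1 span').each(function () {
          $('#2').append(this.id + ' ');
        });
      });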

    Read the article

  • Keep text on left and middle of the div, not center

    - by Jean
    Hello, I want to keep text on the left, but vertically in the middle of a div. <div id=sel>text goes here</div> and my CSS for the same: sel{ text-align:left; vertical-align:middle; } The characters and lines of the text may vary; I am mostly focused on single-line text, which currently sits at the top. I do not want to use position:absolute/relative. Appreciate all help. Thanks, Jean
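
    A hedged sketch for the single-line case (note that the stylesheet rule needs a # prefix to match id="sel"): setting line-height equal to the div's height centers one line of text vertically without any positioning.

      /* sketch: assumes a fixed-height div and one line of text */
      #sel {
        height: 60px;       /* illustrative height */
        line-height: 60px;  /* equal to the height -> vertical middle */
        text-align: left;
      }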

    Read the article

  • on .bind('click') it is not deleting the clicked div

    - by Jean
    Hello, when I click on a particular div, that div should fade out. Simple, but when I click one of the divs it deletes the div at the top of the stack, i.e., when I click #sel6 it removes #sel5. HTML code: <div id="selc" class="selc" style="position:absolute; left:15px; top:200px; width:260px;"> <div id="sel5" class="sel">something</div> <div id="sel6" class="sel">something</div> <div id="sel7" class="sel">something</div> </div> jQuery code (sel_id and sel_1 are variables): $('.selc').bind('click',function(){ var sel_id = $('.sel').attr('id'); alert(sel_id); $('#'+sel_id).fadeOut('slow'); $('#'+sel_id).remove(); $('.search_box').append(sel_1); }); Thanks, Jean
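
    The likely bug, sketched below: $('.sel').attr('id') always returns the id of the first matched element (#sel5). Binding to the items themselves and using $(this) targets the div that was actually clicked.

      // sketch: fade out and remove exactly the clicked .sel div
      $('.sel').bind('click', function () {
        $(this).fadeOut('slow', function () {
          $(this).remove();   // runs after the fade completes
        });
      });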

    Read the article

  • I want to get the value of an id from a nested div - jquery

    - by Jean
    Hello, I want to obtain the .text() of #inner2 <div class="outer" id="outer"> <div id="inner1" class="inner">test1</div> <div id="inner2" class="inner">test2</div> <div id="inner3" class="inner">test3</div> </div> This is the jQuery function I am using: $('.outer').bind('click',function() { var one = $('#inner'+x).attr('id'); alert(one); }); The problem is that the first #id value is shown in the alert. Thanks, Jean
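
    The same first-match pitfall applies here. A sketch that reads the clicked child directly instead of building an id from a counter, assuming jQuery is loaded:

      // sketch: bind to the children so $(this) is the clicked div
      $('.inner').bind('click', function () {
        alert($(this).text());   // shows "test2" when #inner2 is clicked
      });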

    Read the article

  • Migrating from Apache2 to lighttpd causing errors in PHP/MySQL?

    - by Jean-Philippe Murray
    Ok, I've been using basic Ubuntu LAMP setups for years now, and I wanted to give lighttpd a try. My LAMP setup runs in a virtual machine with scripts running just fine. So I created a new virtual machine, starting with a fresh install of Ubuntu, and made my setups. On this new VM, lighttpd + PHP works just fine (or at least it seems to). The problem occurs when I take the scripts from my LAMP setup and upload them to the new VM. I'm getting: Warning: mysql_real_escape_string(): Access denied for user 'www-data'@'localhost' (using password: NO) My lighttpd setup is configured as php-cgi, but my apache2 setup is not. Could this be the source of the problem? I would think scripts are independent of the server configuration, so I doubt it. Also, I know my DB connection information is good (I can log in via phpMyAdmin perfectly). I'm in the dark here, any pointers? Thanks,
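
    One cause worth checking, sketched below: mysql_real_escape_string() needs an open database connection, and when called before mysql_connect() PHP attempts an implicit connection as the web-server user (www-data) with no password, which produces exactly this warning. Credentials here are hypothetical:

      // sketch: connect first, then escape (legacy mysql extension)
      $link = mysql_connect('localhost', 'db_user', 'db_pass');
      $safe = mysql_real_escape_string($value, $link);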

    Read the article

  • Why Is Web Sharing Broken on My Mac?

    - by Sam Murray-Sutton
    Background: I use my Mac for web development, running copies of web sites locally. I recently installed the Snow Leopard update, which to all intents and purposes seems to have gone fine, except... What's not working? Web Sharing; more specifically, I can't turn it on via preferences. The preference pane just hangs when I try to, so Apache doesn't start on reboot. I can start Apache by hand, but I don't know enough to either set up Apache to start with the computer or to properly fix Web Sharing. Further details: my Apache error log shows nothing when the system boots up (as I would expect). This is the error message when I try to start Web Sharing from the Sharing preference pane: 28/09/2009 10:58:05 System Preferences[834] setInetDServiceEnabled failed with 1 for org.apache.httpd Here are the messages given when I start Apache from the command line: [Mon Sep 28 10:35:53 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache] [Mon Sep 28 10:35:54 2009] [warn] mod_bonjour: Skipping user 'sams' - index file /Users/sams/Sites/index.html has zero length. [Mon Sep 28 10:35:54 2009] [notice] Digest: generating secret for digest authentication ... [Mon Sep 28 10:35:54 2009] [notice] Digest: done [Mon Sep 28 10:35:54 2009] [notice] Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 PHP/5.3.0 Phusion_Passenger/2.2.5 configured -- resuming normal operations Please let me know if you need any further details. Any help would be greatly appreciated. UPDATE: I have added an answer of my own below - I was able to solve it thanks to being pointed in the right direction by the comments below, so thanks very much. But I'm still not totally clear on what caused the problem or how my solution addressed it, so I'm leaving the question open for now.
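
    For reference, a sketch of starting Apache by hand on Snow Leopard and registering it with launchd so it starts on boot (paths as shipped with Mac OS X 10.6; adjust if your installation differs):

      # start Apache once
      sudo apachectl start
      # load it at boot via launchd
      sudo launchctl load -w /System/Library/LaunchDaemons/org.apache.httpd.plist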

    Read the article

  • How to install a downloaded Ruby gem file?

    - by JCLL
    How does "gem install" works ? It is not intuitive... My gem is really here : [root@localhost Téléchargement]# ll *.gem -rw-rw-r-- 1 jean jean 16353818 mar 5 11:39 ruby-processing-1.0.9.gem But an idiomatic "gem install" does not see it... [root@localhost Téléchargement]# gem install ruby-processing-1.0.9.gem ERROR: could not find gem ruby-processing-1.0.9.gem locally or in a repository What's wrong with that ? Thx JC

    Read the article

  • Is there such a thing as a portable database that runs easily on OS X and Windows?

    - by Jean-Philippe Murray
    I need to make a small database for a project at school (not computer related at all; I'm indexing and categorizing paper documents of a research project). The thing is that in September my semester is over and other students will have to take over the project (and so on, for every semester!), so I need something that is free and OS-agnostic (or at least OS X/Windows) so it can easily be handed to the next students on the project. I was thinking about a WAMP stack running off a USB key with a MySQL/HTML interface, but that would become locked to the OS I choose first. LibreOffice and the like will be an option in the end if I don't find anything truly portable. Does anyone have a solution in mind?

    Read the article

  • Apply Group Policy to Remote Desktop Services users but not when they log on to their local system

    - by Kevin Murray
    Running Windows Server 2008 Service Pack 2 with the Remote Desktop Services role. I want to hide the server's drives using a GPO, but not the users' local drives when they are logged on to their local systems. Using a GPO, I went to "User Configuration - Policies - Administrative Templates - Windows Components - Windows Explorer" and enabled "Hide these specified drives in My Computer" and "Prevent access to drives from My Computer", and in both used "Restrict all drives". Then under "Security Filtering" for the GPO, I restricted it to the system running Remote Desktop Services and the specific users who will be using RDS. I then applied the GPO to our domain, and it worked a little too well: not only was I successful in getting the GPO to work for RDS users, but it also affected those same users at their local systems. I've tried everything I can think of but can't figure out how to apply this just to RDS and not to their local systems. What am I missing?

    Read the article

  • Display all feed items using simplexml_load_file() in PHP

    - by Jean
    Hello, I want to loop to get all feed items, but only one is displayed. $url = "http://localhost/feeds/feeds.rss"; $xml = simplexml_load_file($url); foreach($xml->item as $result){ echo $result->description."<br>"; } The RSS feed is: <channel> <title>/</title> <link>/</link> <atom:link type="application/rss+xml" href="/" rel="self"/> <description>/</description> <language>/</language> <ttl>/</ttl> <item> <title>/</title> <description>/</description> <pubDate>/</pubDate> <guid>/</guid> <link>/</link> </item> <item> <title>/</title> <description>/</description> <pubDate>/</pubDate> <guid>/</guid> <link>/</link> </item> </channel> Thanks, Jean
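
    A likely fix, assuming the document root is the usual <rss> element: the <item> elements live under <channel>, so the loop needs to descend one level.

      // sketch: iterate the items under <channel>, not under the root
      $xml = simplexml_load_file($url);
      foreach ($xml->channel->item as $result) {
          echo $result->description . "<br>";
      }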

    Read the article

  • get id parameter and increment count for each click in jquery

    - by Jean
    Hello, I want to get the id value of each c_i checkbox, and each time a c_i checkbox is clicked I want to increment the value in chk_count, using jQuery. <div id='d<? echo $i; ?>' style='margin-bottom:8px; border:#cccccc thin solid; height:25px;'> <span style='color:#cccccc; margin-right:5px;'><? echo $i; ?></span> <span><? echo $row_dw_all['d1e']; ?></span> <span style='position:absolute;right:0px;'> <input type='checkbox' name='c_i<? echo $i; ?>' id="c_i<? echo $i; ?>" value='<? echo $row_dw_all['dle']; ?>'> </span> </div> <input type="hidden" name="chk_count" id="chk_count" value="" /> Thanks, Jean
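
    A minimal sketch, assuming jQuery is loaded: select every checkbox whose id starts with c_i, and keep the running count in the hidden #chk_count field.

      // sketch: report the clicked id and bump the hidden counter
      $('input[id^="c_i"]').bind('click', function () {
        alert(this.id);                              // e.g. "c_i3"
        var n = Number($('#chk_count').val()) || 0;
        $('#chk_count').val(n + 1);
      });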

    Read the article

  • Oracle Communications Data Model

    - by jean-pierre.dijcks
    I've mentioned OCDM in previous posts, but I found the following podcast on the topic (see the links at the end of this post) and figured it is worthwhile to spread the news some more. ORetailDM and OCommunicationsDM are the two data models currently available from Oracle. Both are intended to capture:

      • Business best practices and industry knowledge
      • Pre-built advanced analytics intended to predict future events before they happen (like the churn model shown below)
      • Oracle technology best practices to ensure optimal performance of the model

    All of this typically comes with a reduced time to implementation, or as the marketing slogan goes, reduced time to value. Here are the links: Podcast on OCDM; OTN pages for OCDM and ORDM

    Read the article

  • Big Data&rsquo;s Killer App&hellip;

    - by jean-pierre.dijcks
    Recently Keith spent some time talking about the cloud on this blog, and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from? OK, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara, and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extent to write this post.

    What is Big Data anyway? The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well, it turns out that most people in the Big Data track are talking about many collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical; it is actually relevant when thinking about big data and how to process it. It is important because it means that the platform that stores data will most likely consist of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle, and you may store distilled clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data.

    NoSQL and MapReduce. Nope, sorry, this is not the killer app... and no, I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with Perl or Tcl, others with MR. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting; some people will use Java. Some data is stored on HDFS, making Cascading the way to go; some data is stored in Oracle, and SQL does do a good job there. As with storage and with history, be pragmatic and use what fits, and neither NoSQL nor MR will be the one and only. Also, a language, while important, does in itself not deliver business value. So while cool, it is not a killer app...

    Vertical Behavioral Analytics. This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?) but I'm after something more interesting that you can only do after collecting large volumes of specific data.

    That all-important data is about behavior. What do my customers do? More importantly, why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. That is where the Big Data lives and where these patterns are now emerging. Other examples are emerging, however, and one of the examples used at the conference was about predicting churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, its own quirky logic and its own demands and priorities. Of course, the methods and some of the software will be common, and some will have both retail and service industry analytics in place (your corner coffee store, for example). But the gist of it all is that analytics that can predict customer behavior for a specific focused group of people in a specific industry is what makes Big Data interesting.

    Building a Vertical Behavioral Analysis System. Well, that is going to be interesting. I have not seen much going on in that space, and if I had one criticism of the Cloud Connect conference it would be the lack of concrete use cases on big data. The telco example, while a step into the vertical behavioral part, is not really about big data: it used a sample of data from the customers' data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there, including companies like SAS. In-place, btw, does not mean "no data movement at all"; what it means is that you do this on the data's permanent home. For SAS that is kind of the current problem: most of the inputs live in a data warehouse, so why move them into SAS and back? That all worked with 1 TB data warehouses, but when we are looking at 100 TB to 500 TB of distilled data...

    Comments? As it is still early days with these systems, I'm very interested in seeing reactions and thoughts to some of these thoughts...

    Read the article

  • Collaborate10 &ndash; THEconference

    - by jean-pierre.dijcks
    After spending a few days in Mandalay Bay's THEHotel, I guess I now call everything THE... Seriously, they even tag their toilet paper with THEtp... I guess the brand builders in Vegas thought that once you are on to something you keep on doing it, and granted, it is a nice hotel with nice rooms.

    THEanalytics. Most of my collab10 experience was in a room called Reef C, where the BIWA bootcamp was held: two solid days of BI, warehousing and analytics organized by the BIWA SIG at IOUG. I didn't get to see all the sessions, but what struck me was the high interest in analytics. Marty Gubar's OLAP session was full, and he did some very nice things with the OLAP option. The cool bit was that he actually gets all the advanced calculations in OLAP to show up in OBI EE without any effort. It was nice to see that the idea from OWB, where you generate an RPD, is now also in AWM. I think it makes life so much simpler to generate these RPDs from your data model. Even if the end RPD needs some tweaking, it is still a lot less effort to get something going. You can see this stuff for yourself in this demo (click here). OBI EE uses just SQL to get to the calculations, so if you prefer APEX, you can build your application there and get the same nice calculations in an APEX application. Marty also showed the Simba MDX driver used with Excel. I guess we should call that THEcoolone... It is very slick and wonderfully useful for all of you who actually know Excel. The nice thing is that you leverage pure Excel for all operations (no plug-ins). That means no new tools to learn, no new controls, just pure Excel.

    THEdatabasemachine. I got some very good questions in my "what makes Exadata fast" session, and overall the interest in Exadata is overwhelming. One of the things I tried to do in my session was get people to think in new patterns rather than in patterns based on Oracle 9i running on some random hardware configuration. We talked a little bit about the common over-indexing and how everyone has to unlearn all of that on Exadata. The main thing, however, is that everyone needs to get used to the sheer size of some of the components in a Database Machine V2: 5TB of flash cache is a lot of very fast data storage, and half a TB of memory gets quite interesting as well. So what I did there was really focus on some of the content in these earlier posts on Upward ILM and In-Memory processing. In short, I do believe these newer media point out a trend: in-memory and other fast media will get cheaper and will see more use. Some of that we do automatically by adding new functionality, but in some cases I think the end user of the system needs to start thinking about how to leverage all this new hardware. I think most people got very excited about these new capabilities and opportunities.

    THEcoolkids. One of the cool things about the BIWA track was the hands-on track. Very cool to see big crowds for both the OLAP and OWB hands-on sessions. Also quite nice to see that the folks at Rittman Mead spent so much time preparing for that session. While all of them put down cool stuff, none was cooler than seeing Data Mining on an Apple iPad... it all just looks great on an iPad! Very disappointing to see that Mark Rittman still wasn't showing OWB on his iPad ;-)

    THEend. All in all this was a great set of sessions in the BIWA track. Lots of value to our guests (we hope) and we hope they all come again next year!

    Read the article

  • Limiting DOPs &ndash; Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I'd start to answer them in this way. While Dan is running on a big system, he is running with Database Resource Manager and is trying to make sure the system doesn't go crazy (remember, end users are never, ever crazy!) on very high DOPs.

    Q: How do I control statements with very high DOPs driven from user hints in queries?

    A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting you can set in DBRM allows you to override the hint. Now let's go into some more detail here. Assume my object (and for simplicity we assume there is a single object - and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query) has PARALLEL 64 as its setting. Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM, which overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints.

    Q: What happens if I set parallel_max_servers higher than processes (i.e. the max number of processes allowed to run on my machine)?

    A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter, you should get a warning in the alert log stating it cannot take effect. As a follow-up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number; parallel_max_servers will prevent this. And since parallel_max_servers should be lower than max processes, you can never go over either...
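
    As an illustration of the DBRM cap described above, a sketch of a plan directive that limits a consumer group's DOP; the plan and group names are made up, and the syntax follows the 11g DBMS_RESOURCE_MANAGER package:

      -- sketch: cap statements from this consumer group at DOP 32
      BEGIN
        DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
        DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
          plan                     => 'DAYTIME_PLAN',     -- hypothetical plan
          group_or_subplan         => 'REPORTING_GROUP',  -- hypothetical group
          comment                  => 'cap reporting queries at DOP 32',
          parallel_degree_limit_p1 => 32);
        DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
      END;
      /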

    Read the article

  • Partition Wise Joins

    - by jean-pierre.dijcks
    Some say they are the holy grail of parallel computing: PWJ is the basis for a shared-nothing system and the only join method available on such a system (yes, this is oversimplified!). The magic in Oracle is of course that it is one of many ways to join data. And yes, this is the old flexibility vs. simplicity discussion all over again, so I won't go there... The point is that what you must do in a shared-nothing system, you can do in Oracle with the same speed and methods.

    The Theory. A partition-wise join is a join between (for simplicity) two tables that are partitioned on the same column with the same partitioning scheme. In shared nothing this is effectively hard partitioning, locating data on a specific node/storage combo. In Oracle it is logical partitioning. If you now join the two tables on that partitioned column, you can break up the join into smaller joins exactly along the partitions in the data. Since they are partitioned (grouped) into the same buckets, all values required to do the join live in the equivalent bucket on either side. No need to talk to anyone else, no need to redistribute data to anyone else... in short, the optimal join method for parallel processing of two large data sets.

    PWJs in Oracle. Since we do not hard partition the data across nodes in Oracle, we use the Partitioning option of the database to create the buckets, then set the Degree of Parallelism (or run Auto DOP - see here) and get our PWJs. The main questions always asked are: How many partitions should I create? What should my DOP be? In a shared-nothing system the answer is of course as many partitions as there are nodes, which will be your DOP. In Oracle we do want you to look at the workload and concurrency, and once you know that, to understand the following rules of thumb. Within Oracle we have more ways of joining data, so it is important to understand some of the PWJ ideas and what it means if you have an uneven distribution across processes. Assume we have a simple scenario where we partition the data on a hash key, resulting in 4 hash partitions (H1-H4). We have 2 parallel processes that have been tasked with reading these partitions (P1-P2). The work is evenly divided, assuming the partitions are the same size, and we can scan this in time t1 as shown below. Now assume that we have changed the system and have a 5th partition but still have our 2 workers, P1 and P2. The time it takes is actually 50% more, assuming the 5th partition has the same size as the original H1-H4 partitions. In other words, to scan these 5 partitions, the time t2 it takes is not 1/5th more expensive; it is a lot more expensive, and some other join plans may now start to look exciting to the optimizer. Just to post the disclaimer, it is not as simple as I state it here, but you get the idea of how much more expensive this plan may now look...

    Based on this little example there are a few rules of thumb to follow to get partition-wise joins. First, choose a DOP that is a power of two: 2, 4, 8, 16, 32 and so on. Second, choose a number of partitions that is larger than or equal to 2 * DOP. Third, make sure the number of partitions is divisible by 2 without orphans. This is also known as an even number... Fourth, choose a stable partition count strategy, which is typically hash; this can be a subpartitioning strategy rather than the main strategy (range-hash is a popular one). Fifth, make sure you do this on the join key between the two large tables you want to join (and this should be the obvious one...). Translating this into an example: DOP = 8 (determined based on concurrency or by using Auto DOP with a cap due to concurrency) says that the number of partitions >= 16. Number of hash (sub)partitions = 32, which gives each process four partitions to work on. This number is somewhat arbitrary and depends on your data and system. In this case my main reasoning is that if you get more room on the box you can easily move the DOP for the query to 16 without repartitioning... and of course it makes for no leftovers on the table... And yes, we recommend up-to-date statistics. And before you start complaining, do read this post on a cool way to do stats in 11.
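
    Translating the rules of thumb into a sketch (table and column names are illustrative): both tables hash partitioned 32 ways on the join key with DOP 8, which hands each parallel process four partition pairs and keeps the join fully partition-wise.

      -- sketch: equi-partitioned tables enable a full partition-wise join
      CREATE TABLE sales (cust_id NUMBER, amount NUMBER)
        PARTITION BY HASH (cust_id) PARTITIONS 32 PARALLEL 8;

      CREATE TABLE customers (cust_id NUMBER, name VARCHAR2(100))
        PARTITION BY HASH (cust_id) PARTITIONS 32 PARALLEL 8;

      -- a join on the partitioning key can run as independent per-partition joins
      SELECT c.name, SUM(s.amount)
      FROM   sales s JOIN customers c ON (s.cust_id = c.cust_id)
      GROUP  BY c.name;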

    Read the article

  • Big Data Accelerator

    - by Jean-Pierre Dijcks
    For everyone who does not regularly listen to earnings calls, Oracle's Q4 call was interesting (as it usually is). One of the announcements in the call was the Big Data Accelerator from Oracle (Seeking Alpha link here - slightly tweaked for correctness, shown below): "The big data accelerator includes some of the standard open source software, HDFS, the file system and a number of other pieces, but also some Oracle components that we think can dramatically speed up the entire map-reduce process. And will be particularly attractive to Java programmers [...]. There are some interesting applications they do, ETL is one. Log processing is another. We're going to have a lot of those features, functions and pre-built applications in our big data accelerator." Not much else we can say right now; more on this (and Big Data in general) at OpenWorld!

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >