Search Results

Search found 13997 results on 560 pages for 'iron browser'.

  • TCP/IP RST being sent differently in different browsers.

    - by Brian
    On Mac OS X (10.6), if I start a YouTube video download and pull the Ethernet cable for 5 or so seconds, then plug it back in, I get varying results depending on the browser. With Opera and Chrome, the video continues to load after I plug the cable back in. With Safari and Firefox, it never does.

    Using Wireshark to look at the traffic, I found that Opera and Chrome simply ACK the first packet from YouTube after the cable has been plugged back in, but Safari and Firefox set the RST flag (0x4) in the TCP header and no more traffic follows.

    If I put a hub between the machine and the internet connection, the problem goes away and all four browsers continue loading the video when the cable is plugged back into the hub. Again, looking at the Wireshark logs, it's evident that the machine doesn't see the Multicast connection close and there is simply a delay in the packets flowing through.

    So it seems that if Safari and Firefox see a Multicast connection close, and then later see data on that same connection, they will send an RST. My question is why? What is the correct course of action, and why do two of the four browsers do it one way while the other two do it another way? Is there somewhere in the code where I can see this happening in Firefox, for instance? Thank you very much.

  • Expose webservice directly to webclients or keep a thin server-side script layer in between?

    - by max
    Hi, I'm developing a REST webservice (Java, Jersey). The people I'm doing this for want to access the webservice directly via JavaScript. Some instinct tells me this is not a good idea, but I cannot really explain that instinct.

    My natural approach would have been to have the webservice do the real logic and database access, but also have some (relatively thin) server-side script layer (e.g. in PHP). Clients would talk to the PHP layer, which in turn would talk to the webservice. (The webservice would be pretty local to the Apache/PHP server and implicitly trust calls from the script layer. The script layer would take care of session management.) (Btw, I am not talking about just hiding the webservice behind an Apache which simply redirects calls.)

    But as I find myself at a loss for words/arguments to explain my instinct, I wonder whether my instinct is right - note that while I have been developing all kinds of software in all kinds of languages and frameworks for some 17 years, this is the first time I have developed a webservice. So my question is basically: what are your opinions? Are there any standard setups? Is my instinct totally wrong? Or partially? ;P Many thanks, Max

    PS: I might add a few bits of information about the planned usage of the whole application:

    - it will be accessed by different kinds of users, partly general public, partly privileged
    - thus, all major OS/browser combinations can be expected as clients
    - however, writing the client is not my responsibility
    - it will potentially have very high load/traffic
    - the logic of the webservice will later be massively expanded for another product which is basically a superset of the functionality of the current project
    - there is a significant likelihood that at some point an API should be exposed which can be used by 3rd-party developers - obviously, with some restrictions
    - at some point, the public view of the product should become accessible via smartphones, too (in other words, maybe a customized version of the site to adapt to the smaller display and different input methods)
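
    A minimal sketch of the thin-proxy idea the question describes, written here in TypeScript/Node with Express rather than the PHP the poster mentions; the backend URL, route prefix, and session check are all assumptions used only to illustrate the pattern:

        import express from "express";

        // Assumed internal address of the Jersey-style webservice; never exposed to browsers.
        const BACKEND = "http://127.0.0.1:8080/rest";

        const app = express();
        app.use(express.text({ type: "*/*" })); // pass request bodies through untouched

        app.use("/api", async (req, res) => {
          // The session/authorization check would sit here (e.g. a session-cookie lookup);
          // stubbed out because it depends entirely on the real application.
          const authorized = true;
          if (!authorized) {
            res.status(401).send("not logged in");
            return;
          }

          // Forward the call to the trusted backend and relay its response (Node 18+ global fetch).
          const upstream = await fetch(BACKEND + req.url, {
            method: req.method,
            headers: { "content-type": String(req.headers["content-type"] ?? "text/plain") },
            body: ["GET", "HEAD"].includes(req.method) ? undefined : req.body,
          });
          res.status(upstream.status).send(await upstream.text());
        });

        app.listen(3000);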

  • Joomla ImageBrowser Lightbox not working

    - by jmorhardt
    I've been using an image browser for Joomla called ImageBrowser. It's great - easy uploads, easy for clients to use, and it even handles zip file uploads. I have Joomla 1.5.15 installed, plus the newest version of this plugin, on 3 or 4 of our sites with no fuss or issues.

    Recently, the ImageBrowser on one of our sites started acting strangely. It was as if the Lightbox effect we should have had disappeared completely. I compared the settings for the site to another and found they were identical, and I couldn't find a solution in the documentation or forums.

    Here's the URL to look at: http://neda.us/photos?view=gallery&folder=BSU+12-5-09. You can compare that to another of our sites with the same plugin and settings at South Oak Floors dot COM. You should be able to click on a thumbnail and get a full-size view of the image in a Lightbox. Any help much needed and much appreciated.

  • Yeoman 'grunt test' fails on clean project with 'port already in use'

    - by XMLilley
    With: Mac OS 10.8.4, Node 0.10.12, npm 1.3.1, grunt-cli 0.1.9, yo 1.0.0-rc.1, bower 0.9.2, [email protected]

    I encounter the following error with a clean yo angular project, followed by grunt server then grunt test:

        Running "connect:test" (connect) task
        Fatal error: Port 9000 is already in use by another process.

    I'm new to Yeoman and am stumped. I've deleted my original project and created a new one in a fresh folder just to make sure I wasn't overlooking any invisible configs. I restarted the machine to make sure I wasn't running any temporary server processes I had forgotten about.

    After all attempts, the basic server starts fine, attaches to Chrome, and the watcher updates the browser on any changes. (Notably, the server is running on 9000, which seems odd for the test-runner to also be trying to use 9000.) But I get that same error on attempting to start the test runner.

    Is this something I can fix, or an issue I should report to the Yeoman team? Thanks.

  • Using FtpWebRequest and getting an error: the remote server returned error 530, not logged in

    - by user1361207
    I am trying to use FtpWebRequest in C#. My code is:

        // Get the object used to communicate with the server.
        FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://192.168.20.10/file.txt");
        request.Method = WebRequestMethods.Ftp.UploadFile;

        // This example assumes the FTP site uses anonymous logon.
        // Note: in a C# string literal, "dev\ftp" contains the escape sequence \f (form feed),
        // so the user name actually sent is "dev", a form-feed character, then "tp" -
        // not the intended domain\user value.
        request.Credentials = new NetworkCredential("dev\ftp", "devftp");

        // Copy the contents of the file to the request stream.
        StreamReader sourceStream = new StreamReader(@"\file.txt");
        byte[] fileContents = Encoding.UTF8.GetBytes(sourceStream.ReadToEnd());
        sourceStream.Close();
        request.ContentLength = fileContents.Length;
        request.UsePassive = true;

        Stream requestStream = request.GetRequestStream();
        requestStream.Write(fileContents, 0, fileContents.Length);
        requestStream.Close();

        FtpWebResponse response = (FtpWebResponse)request.GetResponse();
        Console.WriteLine("Upload File Complete, status {0}", response.StatusDescription);
        response.Close();

    I get an error at request.GetRequestStream(). The error is: the remote server returned error 530, not logged in.

    If I go to a browser and type ftp://192.168.20.10/ in the URL bar, the browser asks me for a name and password; I enter the same name and password and I see all the files and folders in the FTP folder. Can anyone help me with this problem?

  • Detect mouseover and show tooltip text for dots on an HTML Canvas

    - by carl asquith
    I've recently created a "map". Although it's not very sophisticated (I'm working on it), it has the basic function and is generally heading in the right direction. If you look at it you can see tiny red dots, and on those tiny red dots I want to be able to mouse over and see some text, but I've had a bit of trouble getting the code right. http://hummingbird2.x10.mx/website%20creation/mainpage.htm

    This is all the code so far:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <title>Oynx Warrior</title>
          <link rel="stylesheet" type="text/css" href="mystyle.css" />
        </head>
        <body>
          <h1>Oynx Warrior</h1>
          <canvas id="myCanvas" width="500" height="500" style="border:1px solid #c3c3c3;">
            Your browser does not support the canvas element.
          </canvas>
          <script type="text/javascript">
            var c = document.getElementById("myCanvas");
            var cxt = c.getContext("2d");
            cxt.fillStyle = "#red"; // note: "#red" is not a valid CSS color; "red" or "#f00" was probably intended
            cxt.beginPath();
            cxt.arc(50, 50, 1, 0, Math.PI * 2, true);
            cxt.closePath();
            cxt.fill();
          </script>
        </body>
        </html>
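
    Since canvas pixels carry no DOM events, one common approach is to keep the dot coordinates in an array and hit-test them on mousemove. A minimal sketch in TypeScript; the dot list, hit radius, and the tooltip <div> are assumptions, not part of the page above:

        const canvas = document.getElementById("myCanvas") as HTMLCanvasElement;
        const tooltip = document.getElementById("tooltip") as HTMLDivElement; // assumed absolutely-positioned <div id="tooltip">

        // Keep the dots as data, not just pixels, so they can be hit-tested later.
        const dots = [
          { x: 50, y: 50, label: "Outpost Alpha" },
          { x: 120, y: 80, label: "Outpost Beta" },
        ];
        const HIT_RADIUS = 5; // a little larger than the drawn dot, for easier hovering

        canvas.addEventListener("mousemove", (e) => {
          const rect = canvas.getBoundingClientRect();
          const x = e.clientX - rect.left;
          const y = e.clientY - rect.top;

          const hit = dots.find(d => (d.x - x) ** 2 + (d.y - y) ** 2 <= HIT_RADIUS ** 2);
          if (hit) {
            tooltip.textContent = hit.label;
            tooltip.style.display = "block";
            tooltip.style.left = e.pageX + 10 + "px";
            tooltip.style.top = e.pageY + 10 + "px";
          } else {
            tooltip.style.display = "none";
          }
        });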

  • Rails: Obfuscating Image URLs on Amazon S3? (security concern)

    - by neezer
    To make a long explanation short, suffice it to say that my Rails app allows users to upload images to the app that they will want to keep in the app (meaning, no hotlinking).

    So I'm trying to come up with a way to obfuscate the image URLs so that the address of the image depends on whether or not that user is logged in to the site; if anyone tried hotlinking to the image, they would get a 401 access denied error. I was thinking that if I could route the request through a controller, I could re-use a lot of the authorization I've already built into my app, but I'm stuck there. What I'd like is for my images to be accessible through a URL to one of my controllers, like:

        http://railsapp.com/images/obfuscated?member_id=1234&pic_id=7890

    If the user were to right-click on the image displayed on the website, select "Copy Address", and then paste it in, it would be the SAME url (as in, it wouldn't betray where the image is actually hosted). The actual image would be living at a URL like this:

        http://s3.amazonaws.com/s3username/assets/member_id/pic_id.extension

    Is this possible to accomplish? Perhaps using Rails' render method? Or something else? I know it's possible for PHP to return the correct headers to make the browser think it's an image, but I don't know how to do this in Rails...

    UPDATE: I want all users of the app to be able to view the images if and ONLY if they are currently logged on to the site. If the user does not have a currently active session on the site, accessing the images directly should yield a generic image, or an error message.

  • Help redirecting a page to an adverts page for 5 seconds, and then redirecting to another page

    - by XcodeDev
    Hey, I am trying to redirect a page to another page, and that was working successfully. However, I am now trying to redirect the first page to another page with adverts, and that page should then redirect to another page after five seconds. I am trying to do that with this:

        <?php include('ads.php'); ?>
        <?php
        sleep(2);
        $url = $_GET['url'];
        header("Location: ".$url."");
        exit;
        ?>

    It is showing the advert in ads.php perfectly, but it is not redirecting after five seconds. I am receiving this error in my web browser:

        Warning: Cannot modify header information - headers already sent by (output started at /home/nucleusi/public_html/adverts/ads.php:1) in /home/nucleusi/public_html/adverts/index.php on line 7

    A typical link I would be redirecting to would be this: http://nucleusiphone.com/adverts/index.php/?url=http%3A%2F%2Fitunes.apple.com%2Fmx%2Falbum%2Fstill-got-the-blues%2Fid14135178%3Fi%3D14135158

    Thanks in advance. PS: I don't know any PHP, so any code helps!

  • CakePHP 1.3: weird behavior in Firefox when using $this->Html->link(...)

    - by ion
    Greetings, I am getting a very weird and unpredictable result in Firefox when using the following syntax:

        $this->Html->link(
            $this->Html->div('p-cpt', $project['Project']['name']) .
            $this->Html->div('p-img', $this->Html->image(
                '/img/projects/'.$project['Project']['slug'].'/project.thumb.jpg',
                array('alt' => $project['Project']['name'], 'width' => 100, 'height' => 380)
            )),
            array('controller' => 'projects', 'action' => 'view', $project['Project']['slug']),
            array('title' => $project['Project']['name'], 'escape' => false),
            false
        );

    OK, I know it is big, but bear with me. The point is to get the following output:

        <a href="x" title="x">
          <div class="p-ctp">Name</div>
          <div class="p-img"><img src="z" width="y" height="a" alt="d" /></div>
        </a>

    I'm not sure this validates correctly, both as CakePHP and as HTML, but it works everywhere else apart from Firefox. You can actually see the result here: http://www.gnomonconstructions.com/projects/browser. To reproduce the result, use the form with different categories and press search. At some point it will happen!

    Although most of the time it renders the way it should, sometimes it produces an invalid output like this:

        <a href="x" title="x"></a>
        <div class="p-cpt">
          <a href="x" title="x">name</a>
        </div>
        <div class="p-img">
          <a href="x" title="x"><img src="x" width="x" height="x" alt="x" /></a>
        </div>

    It looks like it repeats the link inside each element. To be honest, the only reason I used this syntax was because CakePHP encourages it. Any help will be much appreciated :)

  • Exemplars of large document-centric applications with COM/XPCOM/.NET interfaces.

    - by Warren P
    I am looking for exemplars (design examples) showing the use of interfaces (aka 'protocols' for you Smalltalkers) to design a document management architecture in a large word processor, spreadsheet, vector graphics or publishing package, or office-productivity (non-database) application, with support for as many of the following as possible:

    - any open source project will be ideal, and language of implementation is unimportant since I am looking for design examples; however, an object oriented language with support for "interfaces" is a must. I know at least a dozen languages, and I'm willing to study any application's source.
    - use of "interface" could loosely be applied to either XPCOM or COM interfaces, or .NET interfaces, or even the use of pure-virtual (virtual+abstract) base classes for OOP languages that lack the ability to declare an interface distinct from a class.
    - I am mostly looking for a robust, thorough and flexible implementation for a document, IDocument, various document views (IDocumentView), and whatever operations make sense in that case.
    - I am particularly interested in cases where the product in question is a real-world product. For example, if anybody familiar with OpenOffice can tell me whether the code contains a good sample design.

    I am looking for design documentation that outlines the design of the interfaces for such an application. So for example, if the OpenOffice spreadsheet has such an interface design, then that might be the best case, because it is a widely used real-world design, with millions of users, rather than a textbook example, which is minimal and contrived.

    I know that the Mozilla platform uses XPCOM, and its design is heavily "interface" oriented, but I am looking more for a "word processor" or "spreadsheet" type of document design, rather than a web browser. I am particularly interested in the interfaces used to access data and metadata, such as markup (attributes like bold and italics, and font size), and the ability to search and look up named entities within a document.
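
    As a rough illustration only (not drawn from OpenOffice, Mozilla, or any other product named here), the kind of document/view separation being asked about might be sketched like this in TypeScript; every member name below is invented:

        // A document exposes content and metadata, but knows nothing about rendering.
        interface IDocument {
          readonly title: string;
          getText(range: { start: number; end: number }): string;
          getAttributes(position: number): { bold: boolean; italic: boolean; fontSize: number };
          findNamedEntity(name: string): number[]; // positions of a named entity, e.g. a bookmark or style
        }

        // A view renders some document and is told when to refresh.
        interface IDocumentView {
          readonly document: IDocument;
          scrollTo(position: number): void;
          refresh(): void;
        }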

  • IE8: weird border around HTML button element

    - by s427
    I have a button element with a custom background (image+color) and no borders except for a 2px border-bottom (and a bunch of other properties -- code below) which renders quite differently in Firefox and in IE8. The problem is, this is a work for a company that uses IE8 as their only browser, so it's important that the button renders well in IE8. Here's a visual comparison between the two:

    My question here is not about the padding difference (I'm looking into that), but about the weird border that is visible on IE8 in addition to the regular border (border-bottom). Can anyone explain to me where it comes from and how to get rid of it? Thanks in advance.

    Here is the HTML code:

        <button class="btn" id="c_edit">
          <span>Annuler</span>
        </button>

    And here is the CSS:

        .btn {
          display: inline-block;
          margin: 0 0 7px 5px;
          padding: 0;
          color: #ddd;
          font-size: 14px;
          font-family: FrutigerLTStd55Roman, sans-serif;
          text-decoration: none;
          border: none;
          border-bottom: 2px solid #222;
          background-color: #999;
          background-image: url('img/btn_bg.gif');
          background-position: 0 bottom;
          background-repeat: repeat-x;
          cursor: pointer;
          transition: all .5s ease-out;
        }

        .btn span {
          display: inline-block;
          margin: 0;
          padding: 8px 10px 6px 40px;
          background-color: transparent;
          background-position: 4px 0;
          background-repeat: no-repeat;
        }

  • Sorting by some field and fetching whole tree from DB

    - by Niaxon
    Hello everyone, I am trying to build a file browser in tree form and have a problem sorting it. I use PHP and MySQL for that.

    I've created a mixed (nested set + adjacency) table 'element' with the following fields: element_id, left_key, right_key, level, parent_id, element_name, element_type (enum: 'folder','file'), element_size. Let's not discuss right now that it would be better to move the information about an element (name, type, size) into another table.

    The function to scan a specified directory and fill the table works correctly. Noteworthy: I am adding elements to the tree in a specific order - folders first and only then files. After that I can easily fetch and display the whole table on the page using a simple query:

        SELECT * FROM element WHERE 1=1 ORDER BY left_key

    With the result of that query and another function I can generate correct HTML code (<ul><li>... and so on).

    Now back to the question (finally, huh?). I am struggling to add sorting functionality. For example, I want to order my result by size. Here I need to keep in mind the whole hierarchy of the tree and the rule: folders first, files later. I believe I can do that by generating a recursive query in PHP:

        SELECT * FROM element WHERE parent_id = {$parentId}
        ORDER BY element_type (so folders would be first), size (or name for example)

    After that, for each result which is a folder, I will send another query to get its content. It's also possible to fetch the whole tree by left_key and after that sort it in PHP as an array, but I guess that would be worse :) I wonder if there is a better and more efficient way to do such a thing?
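
    For the "fetch the whole tree and sort it in PHP" alternative mentioned at the end, the folders-first-then-by-field comparison itself is small. A sketch in TypeScript rather than PHP, with the node shape and field names assumed:

        interface ElementNode {
          elementType: "folder" | "file";
          elementSize: number;
          elementName: string;
          children: ElementNode[];
        }

        // Sort each level: folders before files, then by size within each group.
        function sortLevel(nodes: ElementNode[]): void {
          nodes.sort((a, b) => {
            if (a.elementType !== b.elementType) {
              return a.elementType === "folder" ? -1 : 1;
            }
            return a.elementSize - b.elementSize;
          });
          nodes.forEach(n => sortLevel(n.children)); // recurse so every folder's contents are ordered too
        }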

  • General web service ideas

    - by user2014175
    I have a question regarding different types of web services. I'll preface this by saying that I have built a number of apps (for both iOS and Android) for personal use that interact with the web via PHP and SQL. I have taught myself these languages, and as such don't have the broader background knowledge that many of you do.

    My question is: in what other ways can you perform an interaction between a web service and a mobile device, other than the mobile - PHP - SQL chain?

    For example, if I built a very simple tracking app for my car, my current method would be to push GPS coordinates from my iPhone to my database at a set interval, then write a simple bit of JavaScript that pulled those coordinates out of the database and superimposed them on a Google map. Is there a different way to do this? Such as the server acting as a live middleman that simply pushes the coordinates directly to a target browser, without the database in the middle? If so, are there advantages and disadvantages to these different methods for achieving different goals?

    I know it's a broad question, but I'm really intrigued and I'm finding it difficult to word a Google search for it. Any info or reading material suggestions would be excellent. Thanks.
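
    One concrete version of the "live middleman" idea is a WebSocket relay: the phone posts coordinates to the server, the server pushes them straight to any connected browser, and nothing has to be written to or polled from a database. A browser-side sketch in TypeScript; the endpoint URL, message shape, and updateMarker helper are all made up for the example:

        // Assumed shape of the messages pushed by the relay server.
        interface Position {
          lat: number;
          lng: number;
          timestamp: number;
        }

        const socket = new WebSocket("wss://example.com/car-tracker"); // hypothetical endpoint

        socket.addEventListener("message", (event) => {
          const pos: Position = JSON.parse(event.data);
          // Move the marker on the map instead of re-querying a database.
          updateMarker(pos.lat, pos.lng);
        });

        // Placeholder for whatever map API is in use (e.g. a Google Maps marker's setPosition).
        declare function updateMarker(lat: number, lng: number): void;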

  • Why should I abstract my data layer?

    - by Gazillion
    OOP principles were difficult for me to grasp because for some reason I could never apply them to web development. As I developed more and more projects I started understanding how some parts of my code could use certain design patterns to make them easier to read, reuse, and maintain, so I started to use them more and more. The one thing I still can't quite comprehend is why I should abstract my data layer.

    Basically, if I need to print a list of items stored in my DB to the browser, I do something along the lines of:

        $sql = 'SELECT * FROM table WHERE type = "type1"';
        $result = mysql_query($sql);
        while ($row = mysql_fetch_assoc($result)) {
            echo '<li>'.$row['name'].'</li>';
        }

    I'm reading all these how-tos and articles preaching about the greatness of PDO, but I don't understand why. I don't seem to be saving any lines of code, and I don't see how it would be more reusable, because all the functions that I call above just seem to be encapsulated in a class but do the exact same thing. The only advantage I'm seeing to PDO is prepared statements.

    I'm not saying data abstraction is a bad thing; I'm asking these questions because I'm trying to design my current classes correctly and they need to connect to a DB, so I figured I'd do this the right way. Maybe I'm just reading bad articles on the subject :) I would really appreciate any advice, links, or concrete real-life examples on the subject!
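
    Not the poster's stack, but a sketch of what an abstracted data layer with prepared statements buys you, here in TypeScript with node-postgres standing in for PDO (the table, column, and function names are invented): the SQL lives in one reusable function, and user input is bound separately from the query text, so it cannot change the statement's structure.

        import { Pool } from "pg"; // node-postgres, playing the role PDO plays in PHP

        const pool = new Pool({ connectionString: process.env.DATABASE_URL });

        // Callers ask for data; they never see or build SQL strings themselves.
        export async function namesOfType(type: string): Promise<string[]> {
          // $1 is a bound parameter: the value travels separately from the SQL text.
          const { rows } = await pool.query("SELECT name FROM item WHERE type = $1", [type]);
          return rows.map((r: { name: string }) => r.name);
        }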

  • Prompting for authentication from a wxPython program and passing it along to IIS?

    - by MetaHyperBolic
    I have a client (written in Python, with a wxPython front end in dead-simple wizard style) which communicates with a website running IIS. A Python script receives requests and does the usual client-server dance. I would have written this as a browser application, but for the requirement that certain things happen on the local PC that the web can't help with (file manipulation, interfacing with certain USB hardware, etc.).

    Right now, I am simply using the logon credentials, compounded as a string from os.environ['USERDOMAIN'] and os.environ['USERNAME'], and passing them along to the server, which connects to Active Directory and enumerates the members of the group, looking for those logon credentials. It's an ugly hack, but it works.

    Obviously, I could make people log out of the generic helper accounts and log back into Windows using specific accounts. However, I wondered how feasible it would be to provide some kind of logon prompt where the user can type in a name and password, and then pass some kind of authorization token on to IIS. This seems like something I would not want to do myself, given that amateurs almost always make huge security mistakes. Now you can see why I am wishing this was purely web-based. What's a good way to handle this?

  • Been asked a dozen times, but no luck from what I've read. Prevent Anchor Jumping on page load

    - by jasenmp
    I'm currently working with a WP theme that can be found here: sanjay.dmediastudios.com

    I'm currently using 'smooth scroll' on my page. I'm attempting to have the page smoothly scroll to the requested section when coming from an external link (for instance, coming from the blog page takes you to sanjay.dmediastudios.com/#portfolio); from there I want the page to start at the top and THEN scroll to the portfolio section. What's happening is that it briefly displays the portfolio section (anchor jump) and THEN resets to the top and scrolls down. It's driving me nuts :(. Here is the code I'm using.

    The click function for smooth scroll:

        $(function() {
          $('.menu li a').click(function() {
            if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') &&
                location.hostname == this.hostname) {
              var target = $(this.hash);
              target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
              if (target.length) {
                $root.animate({
                  scrollTop: target.offset().top - 75
                }, 800, 'swing');
                return false;
              }
            }
          }); // end of click function
        });

    The page load function:

        $(window).on("load", function() {
          if (location.hash) { // do the test straight away
            window.scrollTo(0, 0); // execute it straight away
            setTimeout(function() {
              window.scrollTo(0, 0); // run it a bit later also for browser compatibility
            }, 1);
          }
          var urlHash = window.location.href.split("#")[1];
          if (urlHash && $('#' + urlHash).length) {
            $('html,body').animate({
              scrollTop: $('#' + urlHash).offset().top - 75
            }, 800, 'swing');
          }
        });

    Any help would be MUCH appreciated.

  • Why does this javascript code have an infinite loop?

    - by asdas
    optionElements is a 2D array. Each element is an array of length 2: an integer number and an element. I have a select list called linkbox, and I want to add all of the elements to the select list. The order I want them to go in is important and is determined by the number each element has, smallest to highest. So think of it like this, optionElements is:

        [ [5, <option>], [3, <option>], [4, <option>], [1, <option>], [2, <option>] ]

    and it would add them to linkbox in order of those numbers. BUT that is not what happens. It is an infinite loop after the first time. I added the x constraint just to stop it from freezing my browser, but you can ignore it.

        var b;
        var smallest;
        var samllestIndex;
        var x = 0;
        while (optionElements.length > 0 && ++x < 100) {
            smallestIndex = 0;
            smallest = optionElements[0][0];
            b = 0;
            while (++b < optionElements.length) {
                if (optionElements[b][0] > smallest) {
                    smallestIndex = b;
                    smallest = optionElements[b][0];
                }
            }
            linkbox.appendChild(optionElements[smallestIndex][1]);
            optionElements.unshift(optionElements[smallestIndex]);
        }

    Can someone point out to me where my problem is?
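
    For what it's worth, one likely reading of the loop (an interpretation, not confirmed by the poster): unshift adds the chosen pair back onto the front of the array instead of removing it, so optionElements never shrinks, and the > comparison tracks the largest key rather than the smallest. A corrected sketch in TypeScript:

        // optionElements: pairs of [sortKey, optionElement]; smallest key should be appended first.
        function appendInOrder(linkbox: HTMLSelectElement, optionElements: [number, HTMLOptionElement][]): void {
          while (optionElements.length > 0) {
            let smallestIndex = 0;
            for (let b = 1; b < optionElements.length; b++) {
              if (optionElements[b][0] < optionElements[smallestIndex][0]) {
                smallestIndex = b; // '<' picks the smallest key
              }
            }
            linkbox.appendChild(optionElements[smallestIndex][1]);
            optionElements.splice(smallestIndex, 1); // remove the pair so the outer loop terminates
          }
        }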

  • Applying drop shadows to divs

    - by CJD
    Hi everyone, I need a bit of help applying a drop shadow image to a range of DIV elements. The elements in question already have a background image, so I am wrapping another DIV around them. Things get complicated further because I'm also using the 960gs CSS framework. This is my current HTML for a content box type display:

        <div class="grid_12 boxout-shadow-920">
          <div class="boxout">
            <p>planetCJD.co.uk is the personal site and blog of CJD. The site is still a work-in-progress
            but please do have a look around and let me know what you think!</p>
          </div>
        </div>

    Boxout CSS:

        .boxout {
          background: url("../images/overlay.png") repeat-x scroll 0 0 #EEEEEE;
          -moz-border-radius: 4px 4px 4px 4px;
          border: 1px solid #DDDDDD;
          margin-bottom: 15px;
          padding: 5px;
        }

    boxout-shadow-920 CSS:

        .boxout-shadow-920 {
          background: url("../images/box-shadow-920.png") no-repeat scroll 50% 101% transparent;
        }

    Now this works to a degree. The box-shadow image shows at the bottom of the content box, which is what I would like. However, as I'm using a fixed percentage of 101%, if the content box height is too small, not much of the drop shadow image gets shown, and if the content box is too big, whitespace starts to appear between the box and the shadow image.

    So anyway, what I'm looking for is a cross-browser, CSS-based solution for doing this properly. I'm sure there is an easy answer to this - any help is appreciated!

  • Mysterious HttpSession and session-config dependency

    - by OneMoreVladimir
    Good day. I'm developing a Java web app with Servlets/JSP using Tomcat 7.0. During a request from the client I put an object into the session and use a forward. After the forward, while processing the same request, the object can be retrieved if the secure parameter is false; otherwise it is not stored in the session.

        <session-config>
            <session-timeout>15</session-timeout>
            <cookie-config>
                <http-only>true</http-only>
                <secure>true</secure>
            </cookie-config>
            <tracking-mode>COOKIE</tracking-mode>
        </session-config>

    I've figured out that "...cookies can be created with the 'secure' flag, which ensures that the browser will never transmit the specified cookie over non-SSL...". I've configured Tomcat to use SSL, but that hasn't helped. Changing the tracking mode to SSL hasn't helped either.

    How do session-config and the HttpSession object correlate in this case? What could be the problem?

  • JavaScript function working strangely during the first call in Chrome?

    - by Sohil
    Hi all, the JavaScript code below works fine in all browsers, including Chrome (from the second call onwards):

        function call(val) {
            url = window.location.href;
            indexnum = url.lastIndexOf("/");
            str = url.slice(indexnum + 1);
            window.location.href = url.replace(str, "sample.php?src_q=") + val;
        }

    I am calling this function on the onclick of a link, as below:

        <?php echo "<a href='#' onclick='javascript:call(\"$fieldvalue\");'>$fieldvalue</a>" ?>

    Normal behaviour: in all browsers, after clicking on the link the newly formed URL is

        url://localhost/mysite/sample.php?src_q=val

    Strange behaviour: when I click on the link for the first time in Chrome, the value of the variable val gets replaced by the URL and its value, as follows:

        http://localhost/mysite/sample.php?src_q=http://localhost/mysite/val

    This strange behaviour happens on the first click in Chrome. From the second call onwards in the same tab, the value of the variable val works fine and I get the desired URL. I tried to Google it, but couldn't find any explanation. Thanks in advance.

  • How to retrieve content via .load() or $.get() with this line

    - by Sin
    Hello :) I posted a question a day or two ago about how to retrieve PHP via an AJAX method in this modal I was using. I kind of found out the right way to go about it, but there's still something I'm not doing right (obviously, lol). Here's the section that's giving me the issues:

        jQuery('div that holds content').fadeIn(200).css({
            'width': Number( popWidth )
        });
        $('').load('/something/somewhere/this #content');

    I'm using Safari and a local server (MAMP). When I check activity in my browser, it shows that it is loading the content with every click, AND the pop-up pops up, but no content. When I simply retrieve content via a hidden div, of course, I get it. This is what I'm trying to avoid: right now I have that div in my footer stashed as hidden. I'd rather just make a call when it's needed, instead of loading it every single time a page is accessed.

    You can go here to see the whole script I posted in my last question: How to use ajax to show php in a modal pop up

    Anyone have any idea? I read that .load() has the ability to grab specific content from a request, but I'm not sure of the major difference between that and $.get(). I've tried both, and I get the same results. I'm using WordPress, and WordPress's AJAX requests run smooth as ever, so I know it's not a local problem, it's my coding lol. OK, I'm done typing :)

  • How to format dates in Jahia 6 CMS?

    - by dpb
    I am helping a friend of mine put up a site for his business. I've read different posts and sites trying to find the ideal CMS tool, but people have different views of what is best, so I finally just picked one of them at random: I went for an evaluation of Jahia 6.0-CE. As you've probably guessed by now, I don't have much experience with CMS tools. I just want to set up the CMS, write the templates for the site and let my friend manage the content from there on.

    So I extracted the sources from SVN and went for a test drive. I managed to create some simple templates to get the hang of things, but now I have an issue with a date format. In my definitions.cnd I declared the field like so:

        date myDateField (datetimepicker[format='dd.MM.yyyy'])

    This is formatted in the page, and the selector also presents this in the dd.MM.yyyy format when inserting the content. But what about sites in other countries, countries that represent the date as MM.dd.yyyy for example? If I specify the format in the CND, hard-coded, how can I change this later on so that it adapts based on the browser's language? Do I extract the content from the repository and format it by hand in the JSP template based on a Locale, or is there a better way? Thank you.
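
    Not Jahia-specific, but as an illustration of the general "format by locale instead of hard-coding a pattern" idea, this is what it looks like when done client-side in TypeScript; a JSP template would normally reach for its own locale-aware date formatter instead, so treat this only as a sketch of the concept:

        // Render the same timestamp according to the visitor's browser locale.
        function formatForVisitor(isoDate: string): string {
          const date = new Date(isoDate);
          // Day-month order for most European locales, month-day for en-US, etc.
          return new Intl.DateTimeFormat(navigator.language).format(date);
        }

        console.log(formatForVisitor("2010-03-12T12:00:00Z"));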

  • Increment the number of times an article has been read

    - by r.sendecky
    I have a situation where I need to increase the number of times an article has been read. Once someone opens an article, it should be reflected in the database by incrementing the number of reads by one. Simple. Sending a POST request to the server increments the number of reads by one; the article in question is supplied via a URL parameter. Doing it manually by typing the URL in a browser works as expected, so the server side is not at fault.

    My problems start with the JavaScript side of it, or rather jQuery. I hook the event to the article link, so every time a user clicks on the article link it increments the number of reads, like so:

        $('#list-articles .article-link').click(function(e) {
            var oid = $(this).parent().parent().attr('data-oid').toString(); // Get the article id
            $.post("/articles/viewed/" + oid);
        });

    Now this does not work! The number is not increased. I don't prevent the default action, since I need the link to actually open and display the article. Now if I put an alert right after the post, like this:

        $('#list-articles .article-link').click(function(e) {
            var oid = $(this).parent().parent().attr('data-oid').toString(); // Get the article id
            $.post("/articles/viewed/" + oid);
            alert(oid);
        });

    This variant works: after I dismiss the alert window, the number is incremented. Why is this so?? How can I fix this to actually work without the alert present?
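
    The alert is a hint that this is a timing problem: following the link unloads the page, which can cancel the still-pending $.post, while the alert blocks long enough for the request to get out. One way around that, sketched in TypeScript and not tied to the poster's exact markup, is to send the counter hit with navigator.sendBeacon, which is designed to survive navigation:

        document.querySelectorAll<HTMLAnchorElement>("#list-articles .article-link").forEach((link) => {
          link.addEventListener("click", () => {
            const oid = link.closest("[data-oid]")?.getAttribute("data-oid");
            if (oid) {
              // Queued by the browser and delivered even though the page is navigating away.
              navigator.sendBeacon("/articles/viewed/" + oid);
            }
            // No preventDefault: the link still opens the article as before.
          });
        });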

  • git submodule pull and commit automatically on webserver

    - by Lukas Oppermann
    I have the following setup: I am working on the project project with the submodule submodule. Whenever I push changes to GitHub, it sends a POST request to update.php on the server. That PHP file executes a git command. Without submodules I can just do a git pull and everything is fine, but with submodules it is much more difficult. I have this at the moment, but it does not do what I want; I should git pull the repo and update and pull the latest version of each submodule.

        <?php
        echo `git submodule foreach 'git checkout master; git pull; git submodule update --init --recursive; git commit -m "updating"' && git pull && git submodule foreach 'git add -A .' && git commit -m "updating to latest version including submodules" 2>&1s`;

    EDIT: Okay, I got it halfway done.

        <?php
        echo `git submodule foreach 'git checkout master; git pull; git submodule update --init --recursive; git commit -am "updating"; echo "updated"' && git pull && git commit -am "updating to latest version including submodules" && echo 'updated'`;

    The echo prevents the script from stopping because of a non-zero return. It works 100% fine when I run it from the console using php update.php. When GitHub initiates the request, or I run it from the browser, it still does not work. Any ideas?

  • Why Can't Businesses Upgrade their Browsers from IE6/IE7?

    - by viatropos
    I have read a lot these past few weeks on IE6, seeing if it was really that bad to make things look right in it. I have just learned HTML and CSS this past year, so I've been spoiled to start with basically CSS3 and HTML5, and I can do some really cool stuff super fast. I'm no IE6 master and I don't have years of experience with IE. So I thought it'd take a little time to figure out all the IE6/7 hacks people have discovered and just implement them. But it's way harder than that (or maybe just way too much work). I'd have to either completely rebuild my design using "Internet Explorer 'Principles'", or cut out a lot of the neat things I could do using more recent technologies. For a million and one other reasons, everyone who builds things online seems to think IE should die.

    My question is: why can't businesses upgrade their browsers? When I work with businesses, they almost always resist the first time I ask, but 5 seconds later I'll show them what it looks like on my computer and talk about how great the latest stuff is (how much more secure later browsers are, all the famous IE security cases, how much smoother and faster the new browsers are, how the IE team has basically missed the boat entirely, how much smoother business processes run, etc.), and they get excited! And within a few seconds they're up and running with Chrome or something.

    So can businesses not upgrade for some reason? What are the reasons a business cannot upgrade? The main reason I can think of is that they have an old version of Windows. But a) wasn't there a legal case against this? and b) somebody must have figured out how to install Chrome or Firefox on ancient versions of Windows by now.
