Search Results

Search found 9960 results on 399 pages for 'iwork pages'.


  • Rails user authorization

    - by Zachary
    I am currently building a Rails app, and trying to figure out the best way to authenticate that a user owns whatever data object they are trying to edit. I already have an authentication system in place (restful-authentication), and I'm using a simple before_filter to make sure a user is logged in before they can reach certain areas of the website. However, I'm not sure the best way to handle a user trying to edit a specific piece of data - for example lets say users on my site can own Books, and they can edit the properties of the book (title, author, pages, etc), but they should only be able to do this for Books that -they- own. In my 'edit' method on the books controller I would have a find that only retrieved books owned by the current_user. However, if another user knew the id of the book, they could type in http://website.com/book/7/edit , and the controller would verify that they are logged in, then show the edit page for that book (seems to bypass the controller). What is the best way to handle this? Is it more of a Rails convention routing issue that I don't understand (being able to go straight to the edit page), or should I be adding in a before_find, before_save, before_update, etc callbacks to my model?
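
    A common way to enforce ownership in Rails is to scope every lookup through the association on current_user, so a request for someone else's record simply raises RecordNotFound instead of rendering the edit page. A minimal sketch, assuming restful-authentication's login_required filter and a User has_many :books association (names here are illustrative):

        class BooksController < ApplicationController
          before_filter :login_required

          def edit
            # Searching only within the current user's books means a foreign
            # book id is never found, rather than being editable.
            @book = current_user.books.find(params[:id])
          rescue ActiveRecord::RecordNotFound
            flash[:error] = "Book not found"
            redirect_to books_url
          end
        end

    The same scoped find belongs in update and destroy as well; it is a controller/model concern, not something the routing layer can decide for you.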

  • ASP.NET MVC on Cassini: How can I force the "content" directory to return 304s instead of 200s?

    - by Portman
    Scenario: I have an ASP.NET MVC application developed in Visual Studio 2008. There is a root folder named "Content" that stores images and stylesheets. When I run locally (using Cassini) and browse my application, every resource from the "Content" directory is always downloaded. Using Firebug, I can verify that the web server returns an HTTP 200 ("ok"). Desired: I would like for Cassini to return HTTP 304 ("not modified") instead of 200. This is the behavior when running the site under IIS7. Reasoning: The site I am working on has a large number of static resources (often as many as 40 per page). Browsing the site is very fast on IIS7, because these resources are (correctly) cached by the browser. However, browsing the site on my local machine is painfully slow. Pages that render in under 1 second on IIS7 take over 30 seconds to render on Cassini. It's actually faster for me to upload the entire website every few minutes and test from there. (Yes, I recognize that this is perverse and crazy.) So: how can I instruct/trick Cassini into treating the "Content" directory like IIS7 does?
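
    Since Cassini answers every Content request with a 200, one workaround while developing is to add the conditional-GET handling yourself in an HttpModule; Cassini pushes static files through the managed pipeline, so a module should see them. This is only a rough sketch, not a standard MVC or Cassini facility - the module name, the /Content prefix check and the header handling are assumptions to adapt, and it would need registering under <httpModules> in web.config:

        using System;
        using System.IO;
        using System.Web;

        public class ContentConditionalGetModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    // Assumes the app runs at the site root; adjust the prefix otherwise.
                    if (!ctx.Request.Path.StartsWith("/Content/", StringComparison.OrdinalIgnoreCase))
                        return;

                    string file = ctx.Request.PhysicalPath;
                    if (!File.Exists(file))
                        return;

                    DateTime lastWrite = File.GetLastWriteTimeUtc(file);
                    // HTTP dates have one-second resolution
                    lastWrite = lastWrite.AddTicks(-(lastWrite.Ticks % TimeSpan.TicksPerSecond));

                    DateTime since;
                    string header = ctx.Request.Headers["If-Modified-Since"];
                    if (header != null && DateTime.TryParse(header, out since) &&
                        since.ToUniversalTime() >= lastWrite)
                    {
                        // The browser's copy is still current: short-circuit with a 304.
                        ctx.Response.StatusCode = 304;
                        ctx.Response.SuppressContent = true;
                        app.CompleteRequest();
                        return;
                    }

                    // Otherwise serve it, but mark it cacheable so the next request can revalidate.
                    ctx.Response.Cache.SetCacheability(HttpCacheability.Public);
                    ctx.Response.Cache.SetLastModified(lastWrite);
                };
            }

            public void Dispose() { }
        }

    It only matters under Cassini; IIS7 already does this for static content, which is why the deployed site feels fast.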

  • jQuery plugin Tablesorter 2.0 behaves weird

    - by gunther-von-goetzen-sanchez
    I downloaded the jQuery plugin Tablesorter 2.0 from http://tablesorter.com/jquery.tablesorter.zip and modified the example-pager.html which is in tablesorter/docs directory.
    I wrote additional Rollover effects:

        $(function() {
            $("table")
                .tablesorter({widthFixed: true, widgets: ['zebra']})
                .tablesorterPager({container: $("#pager")});

            /** Additional code */
            $("tr").mouseover(function () { $(this).addClass('over'); });
            $("tr").mouseout(function () { $(this).removeClass('over'); });
            $("tr").click(function () { $(this).addClass('marked'); });
            $("tr").dblclick(function () { $(this).removeClass('marked'); });
            /** Additional code END */
        });

    And of course modified the themes/blue/style.css file:

        /* Additional code */
        table.tablesorter tbody tr.odd td { background-color: #D1D1F0; }
        table.tablesorter tbody tr.even td { background-color: #EFDEDE; }
        table.tablesorter tbody tr.over td { background-color: #FBCA33; }
        table.tablesorter tbody tr.marked td { background-color: #FB4133; }
        /* Additional code END*/

    All this works fine BUT when I go to further pages i.e. page 2 3 or 4 the effects are gone! I really appreciate your help
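
    The handlers above are bound only to the rows that exist at document-ready; the pager replaces the tbody rows when you change page, so the new rows have no handlers. One fix, assuming the bundled jQuery is 1.3 or newer, is event delegation via .live(), which keeps working for rows created later (class names reused from the post; this is a sketch, not part of tablesorter's own API):

        $(function() {
            // Delegated handlers survive the pager swapping rows in and out.
            $("table.tablesorter tbody tr").live("mouseover", function () { $(this).addClass('over'); });
            $("table.tablesorter tbody tr").live("mouseout",  function () { $(this).removeClass('over'); });
            $("table.tablesorter tbody tr").live("click",     function () { $(this).addClass('marked'); });
            $("table.tablesorter tbody tr").live("dblclick",  function () { $(this).removeClass('marked'); });
        });

    For the hover effect alone, a plain CSS :hover rule on the rows avoids JavaScript entirely; only the click/dblclick marking really needs delegation or re-binding after each page change.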

  • Positioning problem in Vista when Aero theme is enabled

    - by Adeel Rehman
    Hi, I have a Windows form which has tab pages. On a tab page, I have a combo box. When I click the combo box, a drop down opens up just below it to give the impression of a drop down. The drop down is itself a Windows form, and this is how I set its position:

        Popup.Location = control.PointToScreen(parentcontrol.PointToClient(new System.Drawing.Point(0, control.Height)));

        Popup         = the drop down that opens below the combo box (a Windows Form)
        control       = my combo box (ComboBox)
        parentcontrol = the Windows form on which the control is present (the parent form)

    Problem: the X coordinate is not plotted correctly on the screen with the Aero theme enabled. This works perfectly fine on XP (with some variance in Y due to the TableLayoutPanel, I assume). But when I use the same code on Vista with the Aero theme enabled, the X coordinate wanders away by about 20-30 pixels. If I turn the Aero theme off on Vista, it works fine. I have found that the X and Y coordinates calculated in both cases are the same, but the way Vista draws these coordinates on the screen (when the Aero theme is enabled) is different. Is there any solution to this problem? Thanks, Adeel Rehman.
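
    One thing that may sidestep the discrepancy is to skip the PointToClient/PointToScreen round trip through the parent form and ask the combo box itself for the screen position of its bottom-left corner, which keeps any Aero-related non-client-area offsets out of the calculation. A minimal sketch, assuming Popup is the drop-down form from the post:

        // Screen coordinates of the point just below the combo box,
        // taken directly from the control rather than via the parent form.
        System.Drawing.Point below = control.PointToScreen(new System.Drawing.Point(0, control.Height));
        Popup.StartPosition = FormStartPosition.Manual; // otherwise Location is ignored when the form is shown
        Popup.Location = below;

    Whether this fully explains the 20-30 pixel drift under Aero is a guess, but it removes one coordinate conversion that depends on border and title-bar metrics.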

  • Symfony + JQueryUI Tabs navigation menu question

    - by Andy Poquette
    I'm trying to use the tabs purely as a navigational menu, loading my pages in the tabs, but being able to bookmark said tabs, and have the address bar correspond with the action I'm executing. So here is my simple nav partial: <div id='tabs'> <ul> <li id='home'><a href="<?php echo url_for('post/index') ?>" title="Home">Home</a></li> <li id='test'><a href="<?php echo url_for('post/test') ?>" title="Test">Test</a></li> </ul> </div> And the simple tabs intializer: $(document).ready(function() { $('#tabs').tabs({spinner: '' }); }); My layout just displays $sf_content. I want to display the content for the action I'm executing (currently home or test) in the tab, via ajax, and have the tab be bookmarkable. I've googled this like 4000 times and I can't find an example of what I'm trying to do. If anyone has a resource they know of, I can look it up, or if you can help, I would greatly appreciate it. Thank you in advance.

  • Inheritance of list-style-type property in Firefox (bug in Firebug?)

    - by Marcel Korpel
    Let's have a look at some comments on a page generated by Wordpress (it's not a site I maintain, I'm just wondering what's going on here). As these pages might disappear in the near future, I've put some screenshots online. Here's what I saw: Obviously, the list-item markers shouldn't be there. So I decided to look at the source using Firebug. As you can see, Firebug claims that the list-style property (containing none) is inherited from ol.commentlist. But if that's the case, why are the circle and the square visible? When checking the computed style, Firebug shows the list-style-types correctly. What's the correct behaviour? I just did a quick check in Chromium, whose Web Inspector gave a better view of reality (the list item markers were also displayed in this browser): According to WebKit, list-style of ol.commentlist isn't inherited, only the default value of list-style-type from the rendering engine. So, we may conclude that the output of both browsers is correct and that Firefox (Firebug) shows an incorrect representation of inherited styles. What does the CSS specification say? Inheritance will transfer the list-style values from OL and UL elements to LI elements. This is the recommended way to specify list style information. Not much about the inheritance of ol properties to uls. Is Firebug wrong in this respect? BTW, I managed to let the markers disappear by just changing line 312 of style.css to ol.commentlist, li.commentlist, ul.children { When also explicitly defining the list-style of ul.children to none, the markers are not painted. You can have a look at screenshots of Firebug and WebKit's Web Inspector in this case, if you like.

  • best way to switch between secure and unsecure connection without bugging the user

    - by Brian Lang
    The problem I am trying to tackle is simple. I have two pages - the first is a registration page, I take in a few fields from the user, once they submit it takes them to another page that processes the data, stores it to a database, and if successful, gives a confirmation message. Here is my issue - the data from the user is sensitive - as in, I'm using an https connection to ensure no eavesdropping. After that is sent to the database, I'd like on the confirmation page to do some nifty things like Google Maps navigation (this is for a time reservation application). The problem is by using the Google Maps api, I'd be linking to items through a unsecure source, which in turn prompts the user with a nasty warning message. I've browsed around, Google has an alternative to enterprise clients, but it costs $10,000 a year. What I am hoping is to find a workaround - use a secure connection to take in the data, and after it is processed, bring them to a page that isn't secure and allows me to utilize the Google Maps API. If any of you have a Netflix account you can see exactly what I would like to do when you sign-in, it is a secure page, which then takes you to your account / queue, on an unsecure page. Any suggestions? Thanks!

  • git pull fails after another member pushes something

    - by naiad
    Here's the story... we have a GitHub account. I clone the repository, then I can work with it, commit things, push things, etc. I use Linux with the command line and git version 1.7.7.3. Then another user, using Eclipse and the eGit 1.1.0 plugin, pushes something, and it appears on the GitHub web pages as the last commit, but when I try to pull:

        $ git pull
        remote: Counting objects: 13, done.
        remote: Compressing objects: 100% (6/6), done.
        remote: Total 9 (delta 2), reused 7 (delta 0)
        Unpacking objects: 100% (9/9), done.
        error: unable to find 3e6c5386cab0c605877f296642d5183f582964b6
        fatal: object 3e6c5386cab0c605877f296642d5183f582964b6 not found

    "3e6c5386cab0c605877f296642d5183f582964b6" is the hash of the last commit, done by the other user. There's no problem at all browsing it through the web page, but for me it's impossible to pull it. It's strange, because my command-line output mentions that commit hash, so it knows it is the latest commit on GitHub, but my git cannot pull it! Maybe the git protocol used in eGit is incompatible with console git 1.7.7.3...
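
    An error like this usually points at the local object database (a truncated or corrupted fetch) rather than an eGit/console incompatibility: the transfer completed, but the expected object never made it into .git/objects. A few hedged things to try from inside the clone - plain git commands, nothing project-specific:

        # Check the local object store for corruption or missing objects
        git fsck --full

        # Retry the transfer; a clean fetch may bring the missing object over
        git fetch origin

        # If it still fails, compare against a fresh clone (placeholders, not real values)
        git clone git@github.com:<user>/<repo>.git fresh-copy

    If the fresh clone works, the simplest recovery is often to carry your local branches over to it and discard the damaged clone.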

  • What is the difference between cubes and the Unified Dimensional Model (if any)?

    - by ngm
    I'm currently researching SQL Server 2008 as a business intelligence solution, and currently looking at Analysis Services (and I'm pretty new to business intelligence as a whole...) I'm a bit confused by some of the terms in SSAS, particularly the conceptual differences between cubes and MS's Unified Dimensional Model. I believe that a cube in SSAS is basically an OLAP cube -- dimensions, measures, something that sits between the underlying data source and a business user. But then that's kind of what I understand UDM to be as well. The docs for SQL Server 2005 seem to suggest as much: "A cube is essentially synonymous with a Unified Dimensional Model (UDM)". But then the SQL Server 2008 pages sort of suggest that UDM is a wrapper for both multidimensional data (cubes) and relational data: "Use the Unified Dimensional Model to provide one consolidated business view for relational and multidimensional data that includes business entities, business logic, calculations, and metrics." This blog post suggests similarly: "UDM provides a single dimensional model for all OLAP analysis and relational reporting needs. So you can use either MDX or SQL" Is UDM something that sits above cubes? Or are they the same thing? I presume I would develop cubes with the Cube Designer application; what would I develop a UDM with?

  • Editing a User's Likes on Facebook

    - by Ed Marty
    I've been looking at the Facebook API to find some way to edit a user's Likes (that is, add or remove items from https://graph.facebook.com/me/likes/). The API doesn't say anything about it specifically, but does say this: You can publish to the Facebook graph by issuing HTTP POST requests to the appropriate connection URLs above. Where above, one of the connection URLs is the aforementioned https://graph.facebook.com/me/likes link. However, there's no documentation for the PROFILE_ID/likes post, and whenever I try to post it returns the error "invalid post_id". I assume this is because to like something, you post a request to POST_ID/likes. It's a bit inconsistent. What I'm trying to do is get the user's profile to add a Page to their likes (by posting using the page's id as an "id" parameter in the post body). However, it seems like there's just no way to edit user's likes. At the end of the day, I just want to allow a user to click a button in my application (mobile device application, not a web app) and have them add our Facebook page into their list of pages, and I've found no way of doing that short of presenting our page to them and making them click on the "Like" button manually. Many other things are supported without showing the Facebook website, like posting to their wall or making albums, but I can't find anything to do this. Any ideas?
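
    For reference, the documented direction of that connection at the time was liking an object on the user's behalf, not editing the user's own likes list: you POST to the object's /likes connection with a user access token. A hedged example (IDs and token are placeholders):

        # Like a post (or other likeable object) as the authenticated user:
        curl -X POST "https://graph.facebook.com/POST_ID/likes?access_token=USER_ACCESS_TOKEN"

        # There was no corresponding documented call for POSTing a Page into
        # https://graph.facebook.com/me/likes - which matches what the poster
        # found: becoming a fan of a Page was only exposed through the Like
        # button / Like box social plugins.

    So the "invalid post_id" error is consistent with /likes being writable only on posts and similar objects, not on a user profile.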

  • Monitor file selection in explorer (like clipboard monitoring) in C#

    - by Christian
    Hi, I am trying to create a little helper application; one scenario is a "file duplication finder". What I want to do is this: I start my C# .NET app and it gives me an empty list. I start the normal Windows Explorer and select a file in some folder, and the C# app tells me stuff about this file (e.g. duplicates). How can I monitor the currently selected file in the "normal" Windows Explorer instance? Do I have to start the instance using .NET to have a handle to the process? Do I need a handle, or is there some "global hook" I can monitor inside C#? It's a little bit like monitoring the clipboard, but not exactly the same... Any help is appreciated (if you don't have code, just point me to the right interops, DLLs or help pages :-) Thanks, Chris

    EDIT 1 (current source, thanks to Mattias):

        using SHDocVw;
        using Shell32;

        public static void ListExplorerWindows()
        {
            foreach (InternetExplorer ie in new ShellWindowsClass())
                DebugExplorerInstance(ie);
        }

        public static void DebugExplorerInstance(InternetExplorer instance)
        {
            Debug.WriteLine("DebugExplorerInstance ".PadRight(30, '='));
            Debug.WriteLine("FullName " + instance.FullName);
            Debug.WriteLine("AdressBar " + instance.AddressBar);
            var doc = instance.Document as IShellFolderViewDual;
            if (doc != null)
            {
                Debug.WriteLine(doc.Folder.Title);
                foreach (FolderItem item in doc.SelectedItems())
                {
                    Debug.WriteLine(item.Path);
                }
            }
        }

  • Refresh iFrame (Cache Issue)

    - by Ramiz Uddin
    We are getting a weird issue and are not sure what exactly causes it. Let me elaborate. Suppose we have two different HTML pages, a.html and b.html, and a little script written in index.html:

        <html>
        <head>
        <script>
        function reloadFrame(iframe, src) {
            iframe.src = src;
        }
        </script>
        </head>
        <body>
        <form>
        <iframe id="myFrame"></iframe>
        <input type="button" value="Load a.html" onclick="reloadFrame(document.getElementById('myFrame'), 'a.html')">
        <input type="button" value="Load b.html" onclick="reloadFrame(document.getElementById('myFrame'), 'b.html')">
        </form>
        </body>
        </html>

    A server component is continuously updating both files a.html and b.html, and the content of both files is successfully updated on the server side. If we open the files directly we can see the updated changes, but the client keeps getting the older content, which doesn't show the updated changes. Any idea?
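
    Symptoms like this are usually the browser serving the iframe URL from its own cache. A common workaround - a sketch, not the only fix - is to make every reload look like a new URL by appending a throwaway query parameter; alternatively the server can send Cache-Control: no-cache for a.html and b.html.

        function reloadFrame(iframe, src) {
            // Append a timestamp so the browser treats each load as a new URL
            // and bypasses its cached copy of a.html / b.html.
            iframe.src = src + (src.indexOf('?') === -1 ? '?' : '&') + '_ts=' + new Date().getTime();
        }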

  • Slowdowns when reading from a URLConnection's InputStream (even with byte[] and buffers)

    - by user342677
    Ok so after spending two days trying to figure out the problem, and reading about dizillion articles, i finally decided to man up and ask to for some advice(my first time here). Now to the issue at hand - I am writing a program which will parse api data from a game, namely battle logs. There will be A LOT of entries in the database(20+ million) and so the parsing speed for each battle log page matters quite a bit. The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0. (see source code if using chrome, it doesnt display the page right). It has 1000 hit entries, followed by a little battle info(lastpage will have <1000 obviously). On average, a page contains 175000 characters, UTF-8 encoding, xml format(v 1.0). Program will run locally on a good PC, memory is virtually unlimited(so that creating byte[250000] is quite ok). The format never changes, which is quite convenient. Now, I started off as usual: //global vars,class declaration skipped public WebObject(String url_string, int connection_timeout, int read_timeout, boolean redirects_allowed, String user_agent) throws java.net.MalformedURLException, java.io.IOException { // Open a URL connection java.net.URL url = new java.net.URL(url_string); java.net.URLConnection uconn = url.openConnection(); if (!(uconn instanceof java.net.HttpURLConnection)) { throw new java.lang.IllegalArgumentException("URL protocol must be HTTP"); } conn = (java.net.HttpURLConnection) uconn; conn.setConnectTimeout(connection_timeout); conn.setReadTimeout(read_timeout); conn.setInstanceFollowRedirects(redirects_allowed); conn.setRequestProperty("User-agent", user_agent); } public void executeConnection() throws IOException { try { is = conn.getInputStream(); //global var l = conn.getContentLength(); //global var } catch (Exception e) { //handling code skipped } } //getContentStream and getLength methods which just return'is' and 'l' are skipped Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long ,and what doesnt. The call to this method takes only 200ms on avg public InputStream getWebPageAsStream(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; WebObject wobj = new WebObject(url, 10000, 10000, true, "Mozilla/5.0 " + "(Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)"); wobj.executeConnection(); l = wobj.getContentLength(); // global variable return wobj.getContentStream(); //returns 'is' stream } 200ms is quite expected from a network operation, and i am fine with it. BUT when i parse the inputStream in any way(read it into string/use java XML parser/read it into another ByteArrayStream) the process takes over 1000ms! 
for example, this code takes 1000ms IF i pass the stream i got('is') above from getContentStream() directly to this method: public static Document convertToXML(InputStream is) throws ParserConfigurationException, IOException, SAXException { DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.parse(is); doc.getDocumentElement().normalize(); return doc; } this code too, takes around 920ms IF the initial InputStream 'is' is passed in(dont read into the code itself - it just extracts the data i need by directly counting the characters, which can be done thanks to the rigid api feed format): public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is) throws IOException { // Point A BufferedReader br = new BufferedReader(new InputStreamReader(is)); LinkedList ll = new LinkedList(); String str = br.readLine(); while (str != null) { ll.add(str); str = br.readLine(); } if (((String) ll.get(1)).indexOf("error") != -1) { return new parsedBattlePage(null, null, true, -1); } //Point B Iterator it = ll.iterator(); it.next(); it.next(); it.next(); it.next(); String[][] hits_arr = new String[1000][4]; String t_str = (String) it.next(); String tmp = null; int j = 0; for (int i = 0; t_str.indexOf("time") != -1; i++) { hits_arr[i][0] = t_str.substring(12, t_str.length() - 11); tmp = (String) it.next(); hits_arr[i][1] = tmp.substring(14, tmp.length() - 9); tmp = (String) it.next(); hits_arr[i][2] = tmp.substring(15, tmp.length() - 10); tmp = (String) it.next(); hits_arr[i][3] = tmp.substring(18, tmp.length() - 13); it.next(); it.next(); t_str = (String) it.next(); j++; } String[] b_info_arr = new String[9]; int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13}; for (int i = 0; i < space_nums.length; i++) { tmp = (String) it.next(); b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1); } //Point C return new parsedBattlePage(hits_arr, b_info_arr, false, j); } I have tried replacing the default BufferedReader with BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000); This didnt change much. My second try was to replace the code between A and B with: Iterator it = IOUtils.lineIterator(is, "UTF-8"); Same result, except this time A-B was 0ms, and B-C was 1000ms, so then every call to it.next() must have been consuming some significant time.(IOUtils is from apache-commons-io library). And here is the culprit - the time taken to parse the stream to string, be it by an iterator or BufferedReader in ALL cases was about 1000ms, while the rest of the code took 0ms(e.g. irrelevant). This means that parsing the stream to LinkedList, or iterating over it, for some reason was eating up a lot of my system resources. question was - why? Is it just the way java is made...no...thats just stupid, so I did another experiment. In my main method I added after the getWebPageAsStream(): //Point A ba = new byte[l]; // 'l' comes from wobj.getContentLength above bytesRead = is.read(ba); //'is' is our URLConnection original InputStream offset = bytesRead; while (bytesRead != -1) { bytesRead = is.read(ba, offset - 1, l - offset); offset += bytesRead; } //Point B InputStream is2 = new ByteArrayInputStream(ba); //Now just working with 'is2' - the "copied" stream The InputStream-byte[] conversion took again 1000ms - this is the way many ppl suggested to read an InputStream, and stil it is slow. 
And guess what - the 2 parser methods above (convertToXML() and convertBattlePagetoXMLWithoutDOM(), when passed 'is2' instead of 'is' took, in all 4 cases, under 50ms to complete. I read a suggestion that the stream waits for connection to close before unblocking, so i tried using HttpComponentsClient 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. e.g. this code: public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; HttpClient httpclient = new DefaultHttpClient(); HttpGet httpget = new HttpGet(url); HttpParams p = new BasicHttpParams(); HttpConnectionParams.setSocketBufferSize(p, 250000); HttpConnectionParams.setStaleCheckingEnabled(p, false); HttpConnectionParams.setConnectionTimeout(p, 5000); httpget.setParams(p); HttpResponse response = httpclient.execute(httpget); HttpEntity entity = response.getEntity(); l = (int) entity.getContentLength(); return entity.getContent(); } took even longer to process(50ms more for just the network) and the stream parsing times remained the same. Obviously it can be instantiated so as to not create HttpClient and properties every time(faster network time), but the stream issue wont be affected by that. So we come to the center problem - why does the initial URLConnection InputStream(or HttpClient InputStream) take so long to process, while any stream of same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I cant see any good reasong why it is processed so slowly compared to when a same stream is just created from a byte[]. Considering I have to parse million of entries and thousands of pages like that, a total processing time of almost 1.5s/page seems WAY WAY too long. Any ideas? P.S. Please ask in any more code is required - the only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in packs of 1000+, and the perfomance is ok ~ 200ms/1000entries, prb could be optimized with more cache but I didnt look into it much.
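
    For what it's worth, the timings above suggest the cost sits in pulling the bytes off the socket (the server is still streaming while the parser reads), not in the parsing itself - which is why the same data wrapped in a ByteArrayInputStream looks so much faster. One hedged way to separate the two, reusing conn, l and convertToXML from the post, is to drain the connection through a large BufferedInputStream into memory first and only then parse; this also avoids the off-by-one in the manual read loop (the second read starts at offset - 1 and overwrites the last byte):

        // Drain the network stream into memory first, then parse the in-memory copy.
        InputStream netStream = conn.getInputStream();
        BufferedInputStream in = new BufferedInputStream(netStream, 64 * 1024);
        ByteArrayOutputStream bos = new ByteArrayOutputStream(l > 0 ? l : 256 * 1024);
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            bos.write(buf, 0, n);
        }
        in.close();

        // Everything below works on local memory and can now be timed separately.
        InputStream is2 = new ByteArrayInputStream(bos.toByteArray());
        Document doc = convertToXML(is2);

    If even the copy loop takes around a second, that time is genuine network/server latency for a ~175,000-character page, and the practical win is downloading several pages in parallel rather than tuning the parser further.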

  • ASP.NET MVC on GoDaddy Not Working (Not Primary Domain Deployment)

    - by JPrescottSanders
    I am trying to get ASP.NET MVC working on GoDaddy and I'm not having much luck. I have read the post on SO that covers the subject, but I must have a slightly different configuration or must be missing something along the way, because the main MVC page comes up but all links seem to fail, and no amount of tweaking the URLs seems to get it to work. A little background: I have a single hosting plan with many domains pointed to subfolders of the main domain. Basic ASP.NET Web Forms pages work just fine, but of course I wanted to try and host a sample MVC site in one of these non-primary domains. You can go to the URL here. As you can see, the first page comes up, but if you click on Home or About it doesn't work. Clicking on Home creates this link "http://www.jprescottsanders.com/jps/" and clicking on About creates this link "http://www.jprescottsanders.com/jps/Home/About". As you can see, JPS sneaks in there; this of course is the subfolder that I place my web app files in. I would like to know if this is an MVC-related issue or a GoDaddy issue. I suspect that MVC may want to sit in the root directory of the site, and when it puts the "jps" into the URLs it breaks the routing mechanisms (but this is conjecture). I know Dan said this was possible so I'm hoping he sees this and helps me get to the bottom of this deployment strategy for MVC.

  • How can I use R (RCurl/XML packages?!) to scrape this webpage?

    - by Tal Galili
    Hi all, I have a (somewhat complex) web-scraping challenge that I wish to accomplish and would love some direction (to whatever level you feel like sharing). Here goes: I would like to go through all the "species pages" present in this link: http://gtrnadb.ucsc.edu/ So for each of them I will go to the species page link (for example: http://gtrnadb.ucsc.edu/Aero_pern/) and then to the "Secondary Structures" page link (for example: http://gtrnadb.ucsc.edu/Aero_pern/Aero_pern-structs.html). Inside that link I wish to scrape the data in the page so that I will have a long list containing this data (for example):

        chr.trna3 (1-77)  Length: 77 bp
        Type: Ala  Anticodon: CGC at 35-37 (35-37)  Score: 93.45
        Seq: GGGCCGGTAGCTCAGCCtGGAAGAGCGCCGCCCTCGCACGGCGGAGGcCCCGGGTTCAAATCCCGGCCGGTCCACCA
        Str: >>>>>>>..>>>>.........<<<<.>>>>>.......<<<<<.....>>>>>.......<<<<<<<<<<<<....

    where each record will have its own list (inside the list for each tRNA, inside the list for each animal). I remember coming across the packages RCurl and XML (in R) that can allow for such a task, but I don't know how to use them. So what I would love to have is: 1. some suggestion on how to build such code, and 2. a recommendation on how to learn the knowledge needed for performing such a task. Thanks for any help, Tal
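
    As a starting point, here is a hedged sketch of the kind of loop this usually turns into: getURL() from RCurl to fetch each structures page, htmlParse()/xpathSApply() from XML to pull out the text, and strsplit() to cut it into per-tRNA records. The species vector, the "//pre" XPath and the splitting regex are illustrative guesses, not taken from the site:

        library(RCurl)
        library(XML)

        species <- c("Aero_pern")   # illustrative; the full list would itself be scraped from the front page
        out <- list()

        for (sp in species) {
          url  <- paste("http://gtrnadb.ucsc.edu/", sp, "/", sp, "-structs.html", sep = "")
          page <- getURL(url)
          doc  <- htmlParse(page, asText = TRUE)

          # Assume the structure listings sit in preformatted text blocks
          txt  <- xpathSApply(doc, "//pre", xmlValue)

          # Split the blob into one chunk per tRNA record (regex is a guess to adapt)
          out[[sp]] <- strsplit(paste(txt, collapse = "\n"), "\nchr")[[1]]
        }

    From there each chunk can be split again on "Seq:" / "Str:" to build the nested lists described above.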

  • Factory Method Implementation

    - by cedar715
    I was going through the 'Factory method' pages on SO and came across this link, and this comment. The example looked like a variant, so I thought I'd implement it in its original way: to defer instantiation to subclasses... Here is my attempt. Does the following code implement the Factory pattern of the example specified in the link? Please validate and suggest whether it needs any refactoring.

        public class ScheduleTypeFactoryImpl implements ScheduleTypeFactory {
            @Override
            public IScheduleItem createLinearScheduleItem() {
                return new LinearScheduleItem();
            }

            @Override
            public IScheduleItem createVODScheduleItem() {
                return new VODScheduleItem();
            }
        }

        public class UseScheduleTypeFactory {

            public enum ScheduleTypeEnum {
                CableOnDemandScheduleTypeID, BroadbandScheduleTypeID, LinearCableScheduleTypeID, MobileLinearScheduleTypeID
            }

            public static IScheduleItem getScheduleItem(ScheduleTypeEnum scheduleType) {
                IScheduleItem scheduleItem = null;
                ScheduleTypeFactory scheduleTypeFactory = new ScheduleTypeFactoryImpl();
                switch (scheduleType) {
                    case CableOnDemandScheduleTypeID:
                        scheduleItem = scheduleTypeFactory.createVODScheduleItem();
                        break;
                    case BroadbandScheduleTypeID:
                        scheduleItem = scheduleTypeFactory.createVODScheduleItem();
                        break;
                    case LinearCableScheduleTypeID:
                        scheduleItem = scheduleTypeFactory.createLinearScheduleItem();
                        break;
                    case MobileLinearScheduleTypeID:
                        scheduleItem = scheduleTypeFactory.createLinearScheduleItem();
                        break;
                    default:
                        break;
                }
                return scheduleItem;
            }
        }
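
    For comparison, the form that literally "defers instantiation to subclasses" keeps the factory method abstract on a creator class and lets each subclass supply the concrete product; the code above is closer to a simple factory behind an interface plus a switch. A hedged sketch reusing the post's product types - the creator classes here are invented purely for illustration:

        public abstract class ScheduleItemCreator {
            // The factory method proper: subclasses decide which concrete item to build.
            protected abstract IScheduleItem createScheduleItem();

            public IScheduleItem newItem() {
                IScheduleItem item = createScheduleItem();
                // ...any initialisation common to all schedule items would go here...
                return item;
            }
        }

        public class LinearScheduleItemCreator extends ScheduleItemCreator {
            @Override
            protected IScheduleItem createScheduleItem() {
                return new LinearScheduleItem();
            }
        }

        public class VODScheduleItemCreator extends ScheduleItemCreator {
            @Override
            protected IScheduleItem createScheduleItem() {
                return new VODScheduleItem();
            }
        }

    Whether the original example intended this form or the interface-plus-switch version is for the linked discussion to settle; both are legitimate, they just put the variation point in different places.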

  • Cakephp ACL authentication issue - I'm locked out

    - by Baseer
    I've followed the CakePHP Cookbook ACL tutorial. And as of right now I'm just trying to add users using the scaffolding method. I'm trying to go to /users/add but it always redirects me to the login screen even though I have added $this->Auth->allow('*'); in beforeFilter() temporarily to allow access to all pages. I've done this in both the UsersController and GroupsController as the tutorial asked. Below is my code for UsersController which I think will be the most relevant of all the files. Let me know if any other piece of code is required.

        <?php
        class UsersController extends AppController {

            var $name = 'Users';
            var $scaffold;

            function beforeFilter() {
                parent::beforeFilter();
                $this->Auth->allow('*');
            }

            function login() {
                //Auth Magic
            }

            function logout() {
                //Leave empty for now.
            }
        }
        ?>

    I think I've pretty much followed the tutorial, any ideas as to what I may be missing? Thanks. I've been stuck on this for a while. =(
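
    One thing worth checking in that tutorial's setup is what AppController::beforeFilter() does, since it runs before the controller's own filter and is where the tutorial configures the Auth and Acl components. As a hedged debugging step - nothing here goes beyond the standard Auth API - allowing the scaffolded actions by name from AppController rules out the controller-level filter not being reached at all:

        <?php
        // app_controller.php - a debugging sketch, not the tutorial's final code.
        class AppController extends Controller {
            var $components = array('Acl', 'Auth');

            function beforeFilter() {
                // Temporarily allow the scaffolded actions explicitly while debugging;
                // '*' should work too, but being explicit rules out typos elsewhere.
                $this->Auth->allow('add', 'index', 'view');
            }
        }
        ?>

    If /users/add still redirects with this in place, the redirect is likely coming from ACL authorization (the 'actions' authorize mode needs ACO nodes for each action) rather than from authentication.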

  • Website --> Add Reference also creates files with pdb extensions

    - by SourceC
    Hello,

    Q1: Any assemblies stored in the Bin directory will automatically be referenced by the web application. We can add an assembly reference via Website --> Add Reference or simply by copying the DLL into the Bin folder. But I noticed that when we add a reference via Website --> Add Reference, additional files with a .pdb extension are placed inside Bin. If these files are also needed, then why does the reference still work even if we only place the referenced DLL into Bin, but not the .pdb files?

    Q2: It appears that if you add a new item to a web project, this class will automatically be added to the project list and we can reference it from all pages in this project. So are all files added to the project list automatically being referenced? Thanks.

    EDIT: On your second question - you are adding a public class to a namespace, so it will be visible to other classes in that project and in that namespace. I don't know much about assemblies, but I'd assume the reason an item (class) added to the project is visible to other classes in that project is simply that in a web project all classes get compiled into a single assembly, and public classes contained in the same assembly are always visible to each other? Much appreciated.

  • Automatic web form testing/filling

    - by Polatrite
    I recently became lead on getting an inordinate amount of testing done in a very short period of time. We have many different web forms, using custom (Telerik) controls that need to be tested for proper data validation and sensible handling of the data. Some of the forms are several pages long with 30-80 different controls for data entry. I am looking for a software solution (that is free) that would allow me to automate the process of filling in these forms by designing a script, or using a UI. The other requirement is that I can't use any browsers but IE6 (terrible, I know). I have previously used AutoHotkey to great success for automatic Windows form testing, since Autohotkey's API allows you to directly reference controls on the Windows form. However Autohotkey does not have similar support for web forms (everything is just one big "InternetExplorer" control). While I would prefer that I could script some variance in the data to help serialize each test, it's not necessary, as I could go back through and manually edit a field or two (plus "break" whatever control I'm currently testing) to serialize each test. If you've ever seen Spawner: http://forge.mysql.com/projects/project.php?id=214 It's almost exactly the sort of thing I'm looking for (Spawner generates dummy SQL data, as opposed to dummy webform data) - but I won't be picky, I've got a really short deadline to meet and had this thrust in my lap just today. ;) Edit1: One of the challenges of just using Autohotkey to simulate keyboard input (tabbing through controls) is that some controls don't currently have tab index (bug), and some controls cause a page reload after modification, resulting in inconsistent control focus (tabbing screwed up). Our application makes heavy use of page reloads to populate fields (select a location, it auto-populates a city, for example).

  • Drupal with clean urls turned on is putting question marks in URL

    - by aussiegeek
    I have a Drupal site with clean URLs; the pages load correctly, but then the URL is rewritten, which I really don't want to happen. My .htaccess is:

        <IfModule mod_rewrite.c>
          RewriteEngine on

          # If your site can be accessed both with and without the 'www.' prefix, you
          # can use one of the following settings to redirect users to your preferred
          # URL, either WITH or WITHOUT the 'www.' prefix. Choose ONLY one option:
          #
          # To redirect all users to access the site WITH the 'www.' prefix,
          # (http://example.com/... will be redirected to http://www.example.com/...)
          # adapt and uncomment the following:
          # RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
          # RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]
          #
          # To redirect all users to access the site WITHOUT the 'www.' prefix,
          # (http://www.example.com/... will be redirected to http://example.com/...)
          # uncomment and adapt the following:
          # RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
          # RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

          # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
          # VirtualDocumentRoot and the rewrite rules are not working properly.
          # For example if your site is at http://example.com/drupal uncomment and
          # modify the following line:
          # RewriteBase /drupal
          #
          # If your site is running in a VirtualDocumentRoot at http://example.com/,
          # uncomment the following line:
          RewriteBase /

          # Rewrite URLs of the form 'x' to the form 'index.php?q=x'.
          RewriteCond %{REQUEST_URI} !(connect|administration)
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteCond %{REQUEST_URI} !=/favicon.ico
          RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </IfModule>

  • jQuery show based on checkbox value at page load.

    - by Jacob Huggart
    I have an ASP MVC web app, and on one of the pages there is a set of main checkboxes with sub-checkboxes underneath them. The sub-checkboxes should only show up when the corresponding main checkbox is checked. I have the following code, which works just fine as long as none of the checkboxes are checked when the page loads.

        $("input[id$=Suffix]").change(function() {
            prefix = this.id;
            if (!$(this).hasClass("checked")) {
                $("tr[id^=" + prefix + "]").show();
                $(this).addClass("checked");
            } else {
                $("tr[id^=" + prefix + "]").hide();
                $(this).removeClass("checked");
            }
        });

    Now I need to check a database for the values of the main checkboxes. I get the values and can check the boxes on page load. But when the page comes up, the sub-checkboxes are not displayed when the main checkbox is checked. Also, if the main checkbox is checked when the page loads, the sub-checkboxes are only displayed when the main checkbox is unchecked (obviously because the above function only acts on .change()). What do you all suggest I try? If you need further explanation feel free to ask. Edit: btw, all of this is in $(document).ready().
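
    One straightforward approach is to factor the show/hide logic into a function driven by the checkbox's real checked state, then run it once for every main checkbox inside $(document).ready() as well as on change; that way boxes pre-checked from the database start out expanded. A sketch reusing the selectors from the post:

        function syncSubRows(checkbox) {
            var prefix = checkbox.id;
            // Drive visibility from the actual checked state instead of a toggled class.
            if ($(checkbox).is(":checked")) {
                $("tr[id^=" + prefix + "]").show();
            } else {
                $("tr[id^=" + prefix + "]").hide();
            }
        }

        $(document).ready(function() {
            $("input[id$=Suffix]")
                .change(function() { syncSubRows(this); })
                .each(function() { syncSubRows(this); });  // apply once for the initial, DB-driven state
        });

    This also removes the "checked" marker class, which can drift out of sync with the real state when boxes are pre-checked server-side.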

  • Google Chrome Extension - Help needed

    - by Jim-Y
    I'm new to Google Chrome extension coding, and I have some basic questions. I want to make a Chrome extension, and the scheme is the following:
    - a popup window, containing buttons and result fields (popup.html)
    - when a button is clicked, I want to trigger an event; this event should connect to a web server (I'll make the servlet too) and gather information from the server (XMLHttpRequest())
    - after that, I want my extension to load the gathered information into one of the result fields.

    Simple, isn't it? But I have several problems, right at the beginning :( I started by reading tutorials, but I'm still foggy on the main structure of an extension. Now, I started an app containing a popup.html, manifest.json ... In popup.html there's a result field and a button:

        <div id="extension_container">
            <div id="header">
                <p id="intro">Result here</p>
                <button type="button" id="button">Click Me!</button>
            </div> <!-- END header -->
            <div id="content">
            </div> <!-- END content -->

    When the button is clicked, I trigger an event, handled with jQuery, code here:

        <script>
        $(document).ready(function(){
            $("#button").click(function(){
                $("#intro").text("Hello, im added");
                alert("Clicked");
            });
        });
        </script>

    And here comes the problem: in popup.html this doesn't work; if I load it into Chrome, nothing happens. Otherwise, if I open popup.html in the browser, not as an extension, everything works fine. So, I think I have basic misunderstandings about extension structure, starting with background pages, background JavaScript and so on... :( Could anyone help me?
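
    For the server call itself, extensions of that era can make cross-origin XMLHttpRequests straight from the popup, as long as the host is listed under "permissions" in manifest.json. A hedged sketch - the server URL is made up for illustration:

        // manifest.json (fragment) - grants the popup cross-origin access:
        //   "permissions": ["http://yourserver.example.com/"]

        $(document).ready(function () {
            $("#button").click(function () {
                var xhr = new XMLHttpRequest();
                xhr.open("GET", "http://yourserver.example.com/servlet/data", true);
                xhr.onreadystatechange = function () {
                    if (xhr.readyState === 4 && xhr.status === 200) {
                        // Put the server's answer into the result field
                        $("#intro").text(xhr.responseText);
                    }
                };
                xhr.send();
            });
        });

    It is also worth opening the popup's own developer-tools console (right-click the popup and inspect it) to see whether jQuery actually loaded; a wrong relative path to jquery.js in popup.html is a common reason the click handler never binds when running as an extension.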

  • Rendering LaTeX on third-party websites?

    - by A. Rex
    There are some sites on the web that render LaTeX into some more readable form, such as Wikipedia, some Wordpress blogs, and MathOverflow. They may use images, MathML, jsMath, or something like that. There are other sites on the web where LaTeX appears inline and is not rendered, such as the arXiv, various math forums, or my email. In fact, it is quite common to see an arXiv paper's abstract with raw LaTeX in it, e.g. this paper. Is there a plugin available for Firefox, or would it be possible to write one, that renders LaTeX within pages that do not provide a rendering mechanism themselves? (The LaTeX would be enclosed within dollar signs, e.g. $\pi$. See the arXiv link above.) Some notes: It may be impossible to render some of the code, because authors often copy-paste code directly from their source TeX files, which may contain things like "\cite{foo}" or undefined commands. These should be left alone. This question is a repost of a question from MathOverflow that was closed for not being related to math. There is one answer there, which is helpful, but perhaps Stack Overflow can provide better answers. I program a lot, but Javascript is not my specialty, so comments along the lines of "look at this library" are not particularly helpful to me (but may be to others).
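
    One practical route is a userscript (Greasemonkey in Firefox) that injects a TeX renderer such as MathJax into the page and tells it to treat single dollar signs as inline math. The sketch below is only an outline of that idea - the CDN URL and config name are from memory and may need updating, and the poster's requirement that unrenderable fragments like \cite{foo} be left alone would still need extra filtering on top of this:

        // ==UserScript==
        // @name      Render inline TeX with MathJax
        // @include   http://arxiv.org/*
        // ==/UserScript==

        // Configure MathJax to treat $...$ as inline math before it loads.
        var config = document.createElement("script");
        config.type = "text/x-mathjax-config";
        config.text = "MathJax.Hub.Config({ tex2jax: { inlineMath: [['$','$']] } });";
        document.getElementsByTagName("head")[0].appendChild(config);

        // Load MathJax itself from a CDN (URL is an assumption to verify).
        var loader = document.createElement("script");
        loader.src = "https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML";
        document.getElementsByTagName("head")[0].appendChild(loader);

    A proper extension would add heuristics for which pages and which dollar-delimited spans to touch, which is where most of the real work in the linked MathOverflow answer lies.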

  • How to convert non-Latin encoded text into UTF-8, or make them coexist on the same page?

    - by Yallaa
    Good day, I have a script that scrapes the title/description of remote pages and prints those values into a corresponding charset=UTF-8 encoded page. Here is the problem: whenever a remote page uses a non-Latin character encoding (Arabic, Russian, Chinese, Japanese, etc.), the imported values print as garbled text. I've tried passing those values through either the iconv or mb_convert_encoding converters, but without much success. Then I tried detecting the remote encoding first and changing my presentation page's encoding to the remote one instead of the current UTF-8, which works okay for the imported values, but the other existing UTF-8 content in that language on the page gets garbled instead. Example: I try to import those values from a Russian windows-1251 page into my UTF-8 encoded page, which has mixed English/Russian content. I change the imported non-UTF-8 string into UTF-8 using either iconv or mb_convert_encoding. I tried:

        $RemoteValue = iconv($RemoteEncoding, 'UTF-8', $RemoteValue);
        // or
        $RemoteValue = mb_convert_encoding($RemoteValue, "UTF-8", $RemoteEncoding);
        // or
        $RemoteValue = mb_convert_encoding($RemoteValue, "UTF-8", "auto");

    without success. If I detect that the remote page is windows-1251 encoded and I change my presentation page (which already has UTF-8 encoded mixed-language content) to match the remote page, then the Japanese UTF-8 content on the existing page gets garbled... Can two differently encoded strings coexist on the same page (e.g. UTF-8 & windows-1251)? Am I using the converters correctly? Any hints as to why they don't work? Is there any better way to do this? Thank you for your help.
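
    Mixing two encodings in one page isn't workable in practice; the usual approach is to keep the page UTF-8 and normalise every scraped value to UTF-8 on the way in. The failures described above often come from the declared $RemoteEncoding not matching the real bytes, so detecting before converting can help. A hedged sketch - the candidate-encoding list is illustrative and should be ordered by how likely each encoding is for your sources:

        <?php
        function toUtf8($value, $declaredEncoding = null)
        {
            // Trust an explicit charset from the HTTP header / <meta> tag if we have one...
            $from = $declaredEncoding;

            // ...otherwise make a best guess (strict mode returns false when nothing matches).
            if ($from === null || $from === '') {
                $from = mb_detect_encoding($value,
                    array('UTF-8', 'Windows-1251', 'KOI8-R', 'SJIS', 'GB2312', 'ISO-8859-1'),
                    true);
            }

            if ($from === false || strtoupper($from) === 'UTF-8') {
                return $value;   // already UTF-8, or undetectable - leave it alone
            }
            return mb_convert_encoding($value, 'UTF-8', $from);
        }

        // Usage: $title = toUtf8($rawTitle, $charsetFromHttpHeader);
        ?>

    With every imported string normalised this way, the page stays charset=UTF-8 and the existing Russian and Japanese content is untouched.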
