Search Results

Search found 416 results on 17 pages for 'publicly'.

Page 9 of 17

  • Introducing RedPatch

    - by timhill
    The Ksplice team is happy to announce the public availability of one of our git repositories, RedPatch. RedPatch contains the source for all of the changes Red Hat makes to their kernel, one commit per fix and we've published it on oss.oracle.com/git. With RedPatch, you can access the broken-out patches using git, browse them online via gitweb, and freely redistribute the source under the terms of the GPL. This is the same policy we provide for Oracle Linux and the Unbreakable Enterprise Kernel (UEK). Users can freely access the source, view the commit logs and easily identify the changes that are relevant to their environments. To understand why we've created this project we'll need a little history. In early 2011, Red Hat changed how they released their kernel source, going from a tarball that had individual patch files to shipping the kernel source as one giant tarball with a single patch for all Red Hat-introduced changes. For most people who work in the kernel this is merely an inconvenience; driver developers and other out-of-kernel module developers can see the end result to make sure their module still performs as expected. For Ksplice, we build individual updates for each change and rely on source patches that are broken-out, not a giant tarball. Otherwise, we wouldn’t be able to take the right patches to create individual updates for each fix, and to skip over the noise — like a change that speeds up bootup — which is unnecessary for an already-running system. We’ve been taking the monolithic Red Hat patch tarball and breaking it into smaller commits internally ever since they introduced this change. At Oracle, we feel everyone in the Linux community can benefit from the work we already do to get our jobs done, so now we’re sharing these broken-out patches publicly. In addition to RedPatch, the complete source code for Oracle Linux and the Oracle Unbreakable Enterprise Kernel (UEK) is available from both ULN and our public yum server, including all security errata. Check out RedPatch and subscribe to [email protected] for discussion about the project. Also, drop us a line and let us know how you're using RedPatch!

    Read the article

  • Should I indicate that the user exists or was deleted on the error page?

    - by animuson
    On an ordinary public website, the user's profile is always publicly visible to all visitors (such as Stack Overflow), where they can limit certain pieces of information via privacy settings or by just removing the information. Now the user has decided to delete their account (in my case deactivate) so that their account doesn't technically "exist" anymore. The way my system is set up, when their account is deactivated, their username for any content connected to them just becomes "Anonymous User" as if it were a guest that posted. I feel like this could cause some confusion for other users. I'm also concerned about what kind of error to display when someone attempts to view their profile page. My gut tells me to just display a standard 404 page to hide the fact that they ever existed, but then you also have to consider that, since usernames must be unique, anyone can go to the register page and type in the username to see if it really exists or not. I have a similar problem with another website, which gives users the ability to hide their profiles from the public and allow only registered users to view them. Again, the dilemma is what kind of error message to display when an unregistered user attempts to view a profile without the required permissions. So, would it be acceptable to display basic errors such as "user has been deactivated" or "you must be logged in to view this profile" in order to give other visitors some idea of why the page can't be displayed, or should I attempt to protect the user's privacy a little and just display a standard 404 without indicating in any way that the user might exist? Are there any other issues that I'm not realizing about either route? To go back to the beginning, should I even bother changing the user's name to "Anonymous User" when their account is deactivated? Would it be acceptable to just display a non-linked version of their username in place of the normal linked display name?
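
    For what it's worth, the two routes being weighed here can be sketched in a few lines of framework-neutral Python; the in-memory user store and its field names are hypothetical stand-ins for a real user table, not anyone's actual implementation.

        # Sketch of the two options: hide deactivated accounts behind a plain 404,
        # or return an honest "deactivated" response that confirms the account existed.
        USERS = {
            "alice": {"deactivated": False},
            "bob": {"deactivated": True},   # account has been deactivated
        }

        def profile_response(username, hide_deactivated=True):
            """Return the HTTP status and body a profile request would get."""
            user = USERS.get(username)
            if user is None:
                return 404, "Not found"                          # never existed
            if user["deactivated"]:
                if hide_deactivated:
                    return 404, "Not found"                      # indistinguishable from "never existed"
                return 410, "This account has been deactivated"  # honest, but reveals it once existed
            return 200, "Profile page"

        print(profile_response("bob"))                           # (404, 'Not found')
        print(profile_response("bob", hide_deactivated=False))   # (410, 'This account has been deactivated')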

    Read the article

  • Applying for job: how to showcase work done for (private) past clients?

    - by user33445566
    I want to apply for my first "real" (read: non-freelance) Ruby on Rails job. I've built several apps already. My best work (also the most logically complicated app) was for a freelance client, and I'd like to show it to potential employers. Only problem is: it isn't online anymore. And I've lost touch with the client. How can I include this work in my portfolio? About the app: It's a Facebook game. The client's business idea for this app was not the best. It was never going to make any money. I think it was kind of a vanity side project for him. The logo and graphics are nice-looking, though, and were designed by the client. I've actually spent a lot of time recently recoding most of the app, and adding a full test suite. I want to showcase the BDD / TDD skills I've acquired. I'm not very familiar with the etiquette (/law?) concerning this situation. Can I just put my new version of the app up at a free Heroku URL (perhaps with a "credits" section, where I credit the ideas and graphic designs to my former client)? NOTE: Again, this is just to show potential employers. I am not trying to market the app as my idea, or attract any users. Can I put some or all of the code on GitHub? What if I don't put the code up publicly, but merely send a tarball to potential employers? Do I need to ask permission from my former client (and what if he says no)? The last thing I want to do is get in any legal trouble, or offend people I'm trying to get a job from. But I believe that my work and experience on this app are my highest recommendation for getting a job.

    Read the article

  • Apache2: How do I restrict access to a directory, but allow access to one file within it?

    - by Nick
    I've inherited a poorly designed web app, which has a certain file that needs to be publicly accessible, but that file is inside a directory which should not be. In other words, I need a way to block all files and sub-directories within a directory, but override that for a single file. I'm trying this:

        # No one needs to access this directly
        <Directory /var/www/DangerousDirectory/>
        Order Deny,allow
        Deny from all
        # But this file is OK:
        <Files /var/www/DangerousDirectory/SafeFile.html>
        Allow from all
        </Files>
        </Directory>

    But it's not working: it just blocks everything, including the file I want to allow. Any suggestions?

    Read the article

  • Hiding samba share from browse list for unauthorised users

    - by karlbright
    Hey guys, I have been trying to set up my Samba shares correctly. The setup I was looking for was having a couple of shares available publicly; guest accounts are OK and can browse these shares fine. I have that part working. The problem comes when setting up a share that only certain users can view: although I have set up a share that only allows certain users access, I haven't been able to hide this share from guests. I had a look at the browseable = no option, but this hides it from everyone, including the users that have logged in. Any idea how to tackle this? The setup I have for this private share is as follows:

        [private]
        comment = private share for certain users
        path = /media/drive/private
        create mask = 0777
        directory mask = 0777
        writable = yes
        public = no
        users = admin

    Read the article

  • What is the best approach for deploying apps to companies

    - by Supercell
    What is the best approach for the following scenario:

    1) A publicly available app (available in app stores) which is used by end users to make use of services offered by multiple companies.
    2) These companies also maintain their services using a mobile app.

    I'm not sure how to solve the second part. Having one app for both end-user and admin functionality, secured only by username/password, doesn't sound like a good idea. That leaves the option of developing a separate admin application for the companies. What is the best approach to deploy "admin"-like mobile apps to companies only, for Android, iOS and Windows Phone? Some additional information: Public App ---- Servers ---- Multiple Company Apps. The public app shows all companies offering their services. An end user uses the public app to order something from a specific company. The order is sent to our servers. Our servers send the order to the associated company. The order is displayed in the company's admin app, which is given the option to accept it.

    Read the article

  • Chrome: middle mouse button overridden on certain web pages?

    - by buli
    Normally when I click on a link in Chrome with the middle mouse button, the page opens in a new tab. I find it useful and would like it to always work that way. But on some sites, specifically the SharePoint sites I'm often working with, a middle click on certain links opens the page in the same tab. An example can be found on this publicly available SharePoint demo page when clicking the "Internet Sites" or "SharePoint resources" links with the middle mouse button. Those menu items seem like links (they definitely have a URL assigned). Why do they open in the same tab instead of a new one when middle-clicked? Can this behavior be changed in Chrome's settings or with some addon?

    Read the article

  • What's up with OCFS2?

    - by wcoekaer
    On Linux there are many filesystem choices, and even from Oracle we provide a number of filesystems, all with their own advantages and use cases. Customers often confuse ACFS with OCFS or OCFS2, which then leads to assumptions such as one replacing the other. I thought it would be good to write up a summary of how OCFS2 got to where it is, what we're still up to, how it differs from other options, and how this really is a cool native Linux cluster filesystem that we worked on for many years and that is still widely used.

    Work on a cluster filesystem at Oracle started many years ago, in the early 2000s, when the Oracle Database Cluster development team wrote a cluster filesystem for Windows that was primarily focused on providing an alternative to raw disk devices and helping customers with the deployment of Oracle Real Application Clusters (RAC). Oracle RAC is a cluster technology that lets us make a cluster of Oracle Database servers look like one big database. The RDBMS runs on many nodes and they all work on the same data. It's a shared-disk database design. There are many advantages to doing this, but I will not go into detail as that is not the purpose of my write-up. Suffice it to say that Oracle RAC expects all the database data to be visible in a consistent, coherent way across all the nodes in the cluster. To do that, there were/are a few options:

    1) use raw disk devices that are shared, through SCSI, FC, or iSCSI
    2) use a network filesystem (NFS)
    3) use a cluster filesystem (CFS), which basically gives you a filesystem that's coherent across all nodes using shared disks. It is sort of (but not quite) a combination of options 1 and 2, except that you don't do network access to the files; the files are effectively locally visible as if it were a local filesystem.

    So OCFS (Oracle Cluster FileSystem) on Windows was born. Since Linux was becoming a very important and popular platform, we decided that we would also make this available on Linux, and thus the porting of OCFS/Windows started. The first version of OCFS was really primarily focused on replacing the use of raw devices with a simple filesystem that lets you create files and provides direct IO to these files to get basically native raw disk performance. The filesystem was not designed to be fully POSIX compliant and it did not have anywhere near decent performance for regular file create/delete/access operations. Cache coherency was easy, since it was basically always direct IO down to the disk device, and this ensured that any time one issued a write() it would go directly down to the disk and not return until the write() was completed. Same for read(): any read from a datafile was a read() operation that went all the way to disk and returned. We did not cache any data when it came down to Oracle data files. So while OCFS worked well for that, since it did not have much of a normal filesystem feel, it was not something that could be submitted to the kernel mailing list for inclusion into Linux as another native Linux filesystem (setting aside the Windows porting code ...). It did its job well, it was very easy to configure, node membership was simple, locking was disk based (so very slow, but it existed), and you could create regular files and do regular filesystem operations to a certain extent, but anything that was not database data file related was just not very useful in general. Logfiles, OK; standard filesystem use, not so much. Up to this point, all the work was done, at Oracle, by Oracle developers.
    Once OCFS (1) had been out for a while and saw a lot of use in the database RAC world, many customers wanted to do more and were asking for features you'd expect in a normal native filesystem, a real "general purpose cluster filesystem". So the team sat down and basically started from scratch to implement what's now known as OCFS2 (Oracle Cluster FileSystem release 2). Some basic criteria were:

    - Design it with a real Distributed Lock Manager and use the network for lock negotiation instead of the disk
    - Make it a Linux native filesystem instead of a native shim layer over a portable core
    - Support standard POSIX compliance and be fully cache coherent for all operations
    - Support all the filesystem features Linux offers (ACLs, extended attributes, quotas, sparse files, ...)
    - Be modern: support large files, 32/64-bit, journaling, ordered-data journaling, and be endian neutral so we can mount across endianness and architectures

    Needless to say, this was a huge development effort that took many years to complete, and a few big milestones happened along the way. OCFS2 was developed in the open; we did not have a private tree that we worked on without external code review from the Linux filesystem maintainers. Great folks like Christoph Hellwig reviewed the code regularly to make sure we were not doing anything out of line, and we submitted the code for review on lkml a number of times to see if we were getting close to it being included in the mainline kernel. Using this development model is standard practice for anyone that wants to write code that goes into the kernel and have any chance of doing so without a complete rewrite or, shall I say, a flamefest when submitted. It saved us a tremendous amount of time by not having to re-fit code to get it into a Linus-acceptable state. Some other filesystems that were trying to get into the kernel and didn't follow an open development model had a much harder time and faced much harsher criticism. In March 2006, when Linus released 2.6.16, OCFS2 officially became part of the mainline kernel (it had been accepted a little earlier, in the release candidates). It was the first cluster filesystem to make it into the kernel tree. Our hope was that it would then get picked up by the distribution vendors to make it easy for everyone to have access to a CFS. Today the source code for OCFS2 is approximately 85,000 lines of code. We made OCFS2 production-ready with full support for customers running the Oracle database on Linux, with no extra or separate support contract needed. OCFS2 1.0.0 started being built for RHEL4 for x86, x86-64, ppc, s390x and ia64; for RHEL5, builds started with OCFS2 1.2. SuSE was very interested in high availability and clustering and decided to build and include OCFS2 with SLES9 for their customers, and was, next to Oracle, the main contributor to the filesystem for both new features and bug fixes. Source code was always available even prior to inclusion into mainline, and as of 2.6.16 the source code was just part of a Linux kernel download from kernel.org, which it still is today. So the latest OCFS2 code is always in the upstream mainline Linux kernel. OCFS2 is the cluster filesystem used in Oracle VM 2 and Oracle VM 3 as the virtual disk repository filesystem.
    Since the filesystem is in the Linux kernel, it's released under the GPL v2. The release model has always been that new feature development happened in the mainline kernel, and we then built consistent, well-tested snapshots that had versions: 1.2, 1.4, 1.6, 1.8. But these releases were effectively just snapshots in time that were tested for stability and release quality. OCFS2 is very easy to use: there's a simple text file that contains the node information (hostname, node number, cluster name) and a file that contains the cluster heartbeat timeouts. It is very small, and very efficient. As Sunil Mushran wrote in the manual: OCFS2 is an efficient, easily configured, quickly installed, fully integrated and compatible, feature-rich, architecture and endian neutral, cache coherent, ordered data journaling, POSIX-compliant, shared disk cluster file system. Here is a list of some of the important features that are included:

    - Variable block and cluster sizes: supports block sizes ranging from 512 bytes to 4 KB and cluster sizes ranging from 4 KB to 1 MB (increments in powers of 2).
    - Extent-based allocations: tracks the allocated space in ranges of clusters, making it especially efficient for storing very large files.
    - Optimized allocations: supports sparse files, inline data, unwritten extents, hole punching and allocation reservation for higher performance and efficient storage.
    - File cloning/snapshots: REFLINK is a feature which introduces copy-on-write clones of files in a cluster-coherent way.
    - Indexed directories: allows efficient access to millions of objects in a directory.
    - Metadata checksums: detects silent corruption in inodes and directories.
    - Extended attributes: supports attaching an unlimited number of name:value pairs to filesystem objects like regular files, directories, symbolic links, etc.
    - Advanced security: supports POSIX ACLs and SELinux in addition to the traditional file access permission model.
    - Quotas: supports user and group quotas.
    - Journaling: supports both ordered and writeback data journaling modes to provide filesystem consistency in the event of power failure or system crash.
    - Endian and architecture neutral: supports a cluster of nodes with mixed architectures; allows concurrent mounts on nodes running 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) architectures.
    - In-built cluster stack with DLM: includes an easy-to-configure, in-kernel cluster stack with a distributed lock manager.
    - Buffered, direct, asynchronous, splice and memory-mapped I/O: supports all modes of I/O for maximum flexibility and performance.
    - Comprehensive tools support: provides a familiar EXT3-style tool-set that uses similar parameters for ease of use.

    The filesystem was distributed for Linux distributions in separate RPM form, and this had to be built for every single kernel errata release or every updated kernel provided by the vendor. We provided builds from Oracle for Oracle Linux and all kernels released by Oracle, and for Red Hat Enterprise Linux. SuSE provided the modules directly for every kernel they shipped. With the introduction of the Unbreakable Enterprise Kernel for Oracle Linux and our interest in reducing the overhead of building filesystem modules for every minor release, we decided to make OCFS2 available as part of UEK. There was no more need for separate kernel modules; everything was built in, and a kernel upgrade automatically updated the filesystem, as it should.
    UEK also meant we no longer had to backport new upstream filesystem code into an older kernel version; backporting features into older versions introduces risk and requires extra testing because the code is basically partially rewritten. The UEK model works really well for continuing to provide OCFS2 without that extra overhead. Because the RHEL kernel did not contain OCFS2 as a kernel module (it is in the source tree, but it is not built by the vendor in kernel module form), we stopped adding the extra packages to Oracle Linux and its RHEL-compatible kernel, and for RHEL. Oracle Linux customers/users obviously get OCFS2 included as part of the Unbreakable Enterprise Kernel, SuSE customers get it distributed with SLES, and Red Hat can decide to distribute OCFS2 to their customers if they choose to, as it's just a matter of compiling the module and making it available.

    OCFS2 today, in the mainline kernel, is pretty much feature complete in terms of integration with every filesystem feature Linux offers, and it is still actively maintained, with Joel Becker being the primary maintainer. Since we use OCFS2 as part of Oracle VM, we continue to look at interesting new functionality to add (REFLINK was a good example), and as such we continue to enhance the filesystem where it makes sense. Bugfixes and any code that goes into the mainline Linux kernel and affects filesystems automatically also touch OCFS2, so it is in-kernel and actively maintained, but not a lot of new development is happening at this time. We continue to fully support OCFS2 as part of Oracle Linux and the Unbreakable Enterprise Kernel, and other vendors make their own decisions on support, as it's really a Linux cluster filesystem now more than something that we provide to customers. It really is just part of Linux, like EXT3 or BTRFS; the OS distribution vendors decide.

    Do not confuse OCFS2 with ACFS (ASM Cluster Filesystem), also known as Oracle Cloud Filesystem. ACFS is a filesystem that's provided by Oracle on various OS platforms and really integrates into Oracle ASM (Automatic Storage Management). It's a very powerful cluster filesystem, but it's not distributed as part of the operating system; it's distributed with the Oracle Database product and installs with and lives inside Oracle ASM. ACFS obviously is fully supported on Linux (Oracle Linux, Red Hat Enterprise Linux), but OCFS2, independently, as a native Linux filesystem, is and continues to be supported as well. ACFS is very much tied into the Oracle RDBMS; OCFS2 is just a standard native Linux filesystem with no ties into Oracle products. Customers running the Oracle database and ASM really should consider using ACFS, as it also provides storage/clustered volume management. Customers wanting a simple, easy-to-use, generic Linux cluster filesystem should consider using OCFS2. To learn more about OCFS2 in detail, you can find good documentation on http://oss.oracle.com/projects/ocfs2 in the Documentation area, or get the latest mainline kernel from http://kernel.org and read the source.

    One final, unrelated note: since I am not always able to publicly answer or respond to comments, I do not want to selectively publish comments from readers. Sometimes I forget to publish comments, sometimes I publish them, and sometimes I would publish them but cannot publicly comment on them for some reason, which makes for a very one-sided stream. So for now I am not going to publish comments from anyone, to be fair to all sides.
    You are always welcome to email me and I will do my best to respond to technical questions; questions about strategy or direction are sometimes not possible to answer, for obvious reasons.

    Read the article

  • Telerik RadEditor filestream/string save to RTF

    - by hoombar
    Hi, the functionality I am aiming for is to pull RTF content out of a database, edit it through a web interface (with a WYSIWYG editor) and then place the modified text back into the database (in RTF format). The control I am using to do this is Telerik RadEditor (we already have a license for these controls). In the most recent version there appears to be functionality to load RTF content from a string or a stream, but the only method I can see that is exposed for getting RTF back out is exportToRTF(); this method modifies the headers and allows you to save an RTF version of the content you have just edited as a file. The functionality to convert from HTML to RTF must exist somewhere within their library, as you can export an RTF file, but I cannot find any publicly exposed methods to pass this into a stream or a string. Does anybody know of a way that I can convert the HTML back to RTF using the Telerik libraries without saving out to a file? Thanks

    Read the article

  • Logging IP address for uniqueness without storing the IP address itself for privacy

    - by szabgab
    In a web application, when logging some data I'd like to make sure I can identify data that came at different times but from the same IP address. On the other hand, for privacy concerns, as the data will be released publicly, I'd like to make sure the actual IP cannot be retrieved. So I need some one-way mapping of the IP addresses to some other strings that ensures a 1-1 mapping. If I understand correctly, MD5, SHA1 or SHA256 could be a solution. I wonder if they are not too expensive in terms of processing needed? I'd be interested in any solution, though if there is an implementation in Perl that would be even better.
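
    One caveat and a minimal Python sketch of a common variant: a bare MD5/SHA digest of an IPv4 address can be reversed by brute force because the address space is small, so a keyed hash (HMAC) with a secret kept out of the published data is usually preferred. The key value and the 16-character truncation below are illustrative choices; Perl's core Digest::SHA module offers the same HMAC primitives, and the cost per address is negligible.

        # Sketch: one-way, 1-to-1 mapping of IP addresses to opaque tokens.
        # The secret key must stay private and never ship with the published data;
        # its value and the 16-hex-character truncation are illustrative choices.
        import hmac
        import hashlib

        SECRET_KEY = b"replace-with-a-long-random-secret"

        def pseudonymize_ip(ip_address):
            digest = hmac.new(SECRET_KEY, ip_address.encode("ascii"), hashlib.sha256)
            return digest.hexdigest()[:16]  # the same IP always maps to the same token

        print(pseudonymize_ip("192.0.2.17"))   # stable token for this address
        print(pseudonymize_ip("192.0.2.18"))   # different address, different token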

    Read the article

  • GPLv3 and consulting

    - by Mjgp2
    If you are doing a one-off piece of work as a consultant, for which the client receives only binaries, can you use GPLv3 software? You are working on behalf of the client and not distributing it. From a standard GPL FAQ:

    Q: I am an OXID partner/reseller/consultant and I occasionally do one-time OXID eShop Community Edition customization work for clients which is derived from OXID modules. These customizations are used only for the client. Do I need to make these changes publicly available?
    A: No, unless your client redistributes the code, in which case the client must make it available to its licensees.

    Is this implying it is OK?

    Read the article

  • digital signature - detached Pkcs#7 to XML-DSIG

    - by Alois
    Hi! I am struggling with the following scenario: an XML message is created client-side and digitally signed using Mozilla's window.crypto.signText. After signing, the message and the signature are transmitted via a web service (.NET) to the server. Everything is fine up to this point. On the server, the XML shall be included in another XML document, which is publicly accessible. The signature should be published as well in order to provide non-repudiation. Q: Is there a smooth option to convert the detached PKCS#7 into XML-DSIG (e.g. functionality within the .NET Framework)? Q2: Or is it possible to create the XML-DSIG client-side already, without using external plugins? Thanks for your help! Alois Paulin

    Read the article

  • Hashes vs numeric IDs

    - by Karan Bhangui
    When creating a web application that somehow displays a unique identifier for a recurring entity (videos on YouTube, or book sections on a site like mine), would it be better to use a uniform-length identifier like a hash, or the unique key of the item in the database (1, 2, 3, etc.)? Besides revealing a little (what I think is immaterial) information about the internals of your app, why would using a hash be better than just using the unique id? In short: which is better to use as a publicly displayed unique identifier - a hash value, or a unique key from the database? Edit: I'm opening up this question again because Dmitriy brought up the good point of not tying the naming down to a DB-specific property. Will this sort of tie-down prevent me from optimizing/normalizing the database in the future? The platform uses PHP/Python with ISAM w/ MySQL.
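
    One middle ground, sketched below in Python, is to mint a random fixed-length public token for each record and store it in its own column, so public URLs neither expose nor depend on the internal numbering and the primary key stays free to change during later normalization. The column name, URL shape and token length are illustrative choices only.

        # Sketch: give each record its own random, fixed-length public identifier,
        # stored alongside the numeric primary key, so the auto-increment id stays
        # purely internal.
        import secrets

        def new_public_id(nbytes=6):
            # 6 random bytes -> an 8-character URL-safe token, e.g. 'qZ3kPf2w'
            return secrets.token_urlsafe(nbytes)

        # At insert time, store both values, e.g. (illustrative schema):
        #   INSERT INTO videos (public_id, title) VALUES ('qZ3kPf2w', '...')
        # and look records up by public_id in URLs like /video/qZ3kPf2w.
        print(new_public_id())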

    Read the article

  • Detect language of text

    - by Nikhil
    Is there any C# library which can detect the language of a particular piece of text? i.e. for an input text "This is a sentence", it should detect the language as "English". Or for "Esto es una sentencia" it should detect the language as "Spanish". I understand that language detection from text is not a deterministic problem. But both Google Translate and Bing Translator have an "Auto detect" option, which best-guesses the input language. Is there something similar available publicly, preferably in C#?
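
    The asker is after a C# library, but the usual "auto detect" technique is just scoring the text against per-language profiles of common words or character n-grams. A toy Python illustration of that idea follows; the two hand-picked stopword sets are illustrative only, and real detectors use much larger n-gram profiles over many languages.

        # Toy illustration of profile-based language guessing: score the input
        # against small hand-picked stopword sets and pick the best match.
        PROFILES = {
            "English": {"the", "is", "a", "this", "and", "of", "to", "in"},
            "Spanish": {"el", "la", "es", "una", "este", "y", "de", "que"},
        }

        def guess_language(text):
            words = text.lower().split()
            scores = {lang: sum(w in vocab for w in words) for lang, vocab in PROFILES.items()}
            return max(scores, key=scores.get)

        print(guess_language("This is a sentence"))      # English
        print(guess_language("Esto es una sentencia"))   # Spanish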

    Read the article

  • Svn - get the list of all repos on a server so I can svnsync

    - by egarcia
    I'm attempting to create a backup of my client's existing svn repositories, which is publicly available over http. If possible, I'd like to be able to make new repositories automatically, from any computer, without having to give console access to the server to external parties (i.e. the users could do a ls on my svn repo dir) My problem is that I need to know the list of svn repositories on the server - it isn't a fixed list, since the user will add new repositories over time. I'm able to list the repositories on an html page via Apache's mod_dav_svn module, using the SVNListParentPath On directive. I got this page: http://svn.ohwr.org/ My question is: what is the easiest way to obtain a usable list of such repositories? I'll need to parse that list in order to make syncs, probably using shell commands. Must I parse the HTML with shell commands, or is there a better way to get that list?
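
    One possible approach, sketched in Python rather than raw shell: fetch the SVNListParentPath page, pull the repository names out of its links, and feed each one to svnsync. The URL, the href pattern and the local mirror layout below are assumptions about how the Apache-generated listing looks, so adjust them to the real page; the sketch also assumes each local mirror was already created and svnsync-initialized.

        # Sketch: list repositories from an SVNListParentPath page and mirror each
        # one with svnsync.
        import re
        import subprocess
        import urllib.request

        PARENT_URL = "http://svn.ohwr.org/"     # the public listing page
        BACKUP_ROOT = "/var/backups/svn"        # hypothetical local mirror root

        def list_repositories(parent_url):
            html = urllib.request.urlopen(parent_url).read().decode("utf-8", "replace")
            # assumption: the listing links each repository as <a href="name/">name</a>
            return sorted(set(re.findall(r'href="([^"/:]+)/"', html)))

        def sync_repository(name):
            mirror_url = "file://%s/%s" % (BACKUP_ROOT, name)
            # assumes: svnadmin create + svnsync init were already run for this mirror
            subprocess.run(["svnsync", "sync", mirror_url], check=True)

        for repo in list_repositories(PARENT_URL):
            print("syncing", repo)
            sync_repository(repo)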

    Read the article

  • Could an ontology suitably replace an RDBMS for a web app?

    - by Thomas
    I'm considering storing the content of a web app in an RDF or OWL ontology instead of an RDBMS. However, when I research ontologies they seem to always exist in the context of publicly accessible data stores serving as the backbone of the semantic web. I've never heard of them being used as the content engine behind a web app. Is it reasonable to use an ontology instead of an RDBMS for such an application? (Again, this is just for content. User data, commerce and stuff like that will stay in a database as I see no need to reinvent the wheel there.)

    Read the article

  • Programmatically clean Word generated HTML while preserving styles?

    - by GeReV
    In my current company, we have this decade old... let's call it a "Hello World" application. While wanting to create a newer version of it, we also want to preserve older entries. These older entries contain hideous Word generated HTML which was never filtered before. If and when we move to a newer system, I'd generally prefer to have that HTML cleaned and filtered in order to have the site comply with HTML standards as much as possible. However, just cleaning that code like Jeff Atwood described in his blog or in any other way I know of would also ruin the style and formatting. Now, that just might cause our users to revolt and then all hell will break loose... Not a very good idea. Question is -- can Word's HTML be cleaned while preserving basic formatting? (e.g: coloring, italicized, bold text and so on) Preferably using publicly available code or library, such as HTML Tidy, examples in C# would be much appreciated. Thanks!
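
    The asker wants C# and mentions HTML Tidy, but the general idea (whitelist the tags and the handful of CSS properties worth keeping, drop everything Word-specific) is easy to sketch. A rough Python version using only the standard library follows; the tag and style whitelists are illustrative, and entity re-escaping is omitted for brevity.

        # Rough sketch of whitelist-based cleaning that keeps basic formatting:
        # only listed tags survive, and of each style attribute only a few safe
        # CSS properties (colour, bold, italic, underline) are kept, which drops
        # Word's o:p / w:* tags and mso-* style properties.
        from html.parser import HTMLParser

        ALLOWED_TAGS = {"p", "br", "b", "i", "u", "em", "strong", "span", "ul", "ol", "li"}
        ALLOWED_STYLES = ("color", "background", "font-weight", "font-style", "text-decoration")

        class WordHtmlCleaner(HTMLParser):
            def __init__(self):
                super().__init__()
                self.out = []

            def handle_starttag(self, tag, attrs):
                if tag not in ALLOWED_TAGS:
                    return
                style = dict(attrs).get("style") or ""
                kept = [d.strip() for d in style.split(";")
                        if d.strip().lower().startswith(ALLOWED_STYLES)]
                if kept:
                    self.out.append('<%s style="%s">' % (tag, ";".join(kept)))
                else:
                    self.out.append("<%s>" % tag)

            def handle_endtag(self, tag):
                if tag in ALLOWED_TAGS:
                    self.out.append("</%s>" % tag)

            def handle_data(self, data):
                self.out.append(data)

        def clean(html):
            cleaner = WordHtmlCleaner()
            cleaner.feed(html)
            return "".join(cleaner.out)

        print(clean('<p class="MsoNormal"><b style="mso-bidi-font-weight:normal;color:red">Hi</b></p>'))
        # -> <p><b style="color:red">Hi</b></p>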

    Read the article

  • Setting the background colour/highlight colour for a given string range using Core Text

    - by Jasarien
    I have some text laid out using Core Text in my iPhone app. I'm using NSAttributedString to set certain styles within the text for given ranges. I can't seem to find an attribute for setting a background / highlight colour, though it would seem it should be possible. I couldn't find an attribute name constant that sounded relevant, and the documentation only lists:

        kCTCharacterShapeAttributeName
        kCTFontAttributeName
        kCTKernAttributeName
        kCTLigatureAttributeName
        kCTForegroundColorAttributeName
        kCTForegroundColorFromContextAttributeName
        kCTParagraphStyleAttributeName
        kCTStrokeWidthAttributeName
        kCTStrokeColorAttributeName
        kCTSuperscriptAttributeName
        kCTUnderlineColorAttributeName
        kCTUnderlineStyleAttributeName
        kCTVerticalFormsAttributeName
        kCTGlyphInfoAttributeName
        kCTRunDelegateAttributeName

    Craig Hockenberry, developer of Twitterrific, has said publicly on Twitter that he uses Core Text to render the tweets, and Twitterrific has this background / highlight that I'm talking about when you touch a link. Any help or pointers in the right direction would be fantastic, thanks. Edit: Here's a link to the tweet Craig posted mentioning "Core text, attributed strings and a lot of hard work", and the follow-up that mentioned using CTFramesetter metrics to work out whether touches intersect with links.

    Read the article

  • asp.net MVC - how to get complete local and global resources

    - by Buthrakaur
    I'm localizing an application and need to provide a JSON representation of local and global resources to the JS part of the application for all views. My current idea is to implement HtmlHelper extension methods like GetLocalResourcesJSON/GetGlobalResourcesJSON which would encode all resource keys+values and return them JSON-encoded as a string (I'd implement caching as well). At the moment I'm able to retrieve a single specific key from a global or local resource belonging to the current view (using httpContext.GetGlobalResourceObject/GetLocalResourceObject), but I can't find out how to retrieve the whole resource object and iterate over all its keys+values. Is there any method to achieve this? It looks like ResourceProviderFactory could be the key to this problem, but it's not accessible publicly anywhere. I could instantiate ResourceExpressionBuilder and use reflection to retrieve the provider using the GetLocal/GlobalResourceProvider() methods, but I don't like using reflection here at all...

    Read the article

  • Why can't sub-packages see package private classes?

    - by Polaris878
    Okay so, I have this project structure:

        package A.B
            class SuperClass (this class is marked package private)
        package A.B.C
            class SubClass (inherits from super class)

    I'd rather not make SuperClass publicly visible... It is really just a utility class for this specific project (A.B). It seems to me that SubClass should be able to see SuperClass, because package A.B.C is a subpackage of A.B... but this is not the case. What would be the best way to resolve this issue? I don't think it makes sense to move everything in A.B.C up to A.B or move A.B down to A.B.C... mainly because there will probably be an A.B.D which inherits from stuff in A.B as well... I'm a bit new to Java, so be nice :D (I'm a C++ and .NET guy)

    Read the article

  • What javascript min tool does jquery use?

    - by JDS
    I have a great deal of javascript that needs to be min'd before being served to the end user. Currently, I'm using JSMIN, but I'd like to switch to something a bit more powerful (such as something with local variable replacement). I'm currently looking at YUI min developed by yahoo, and it got me thinking about the min tool that jquery uses. Does anyone know what it is and if it's publicly available? Also, any recommendations on other min tools that might be better suited than YUI min? If possible, I'd like a java solution so I can just plug the library into what I've already created for the JSMIN solution. Thanks

    Read the article

  • Subscribing to MSMQ over the internet

    - by Nathan Palmer
    I haven't been able to find a clear answer to this problem. Is there a good way to subscribe to an MSMQ over the internet? Ideally I need security, both authentication and encryption, for this connection. But I would like the subscriber to act just like any other client subscribed on the local network. I believe I have a couple of options here:

    - Expose the MSMQ ports publicly
    - Put the MSMQ behind some type of WCF service (not sure if that works for a subscriber)

    What other options do I have? We're in a .NET environment, and the main problem being solved is to change the remote connections from a pull-based system to an event-based system to reduce the load on the main server.

    Read the article

  • Question about frequency of updating Access

    - by I__
    I have a table in an Access database. This Access database is used on a regular basis, basically from 9-5. Someone else has a copy of this exact table. Sometimes records are added, sometimes deleted, and sometimes data within the records is updated. I need to update the Access database table with the offsite table every hour or so. What is the best algorithm for updating the data? There are about 5000 records. Would it severely lock up the table for a few seconds every hour? I would like to publicly apologize for my rude comment to David Fenton.
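
    A common way to frame the hourly update is a key-based diff: read both copies keyed by primary key, compute which keys to insert, update and delete, then apply just those statements. The driver-agnostic Python sketch below uses two dicts as stand-ins for the two copies of the table; with roughly 5000 rows the comparison itself takes a tiny fraction of a second, so lock time is dominated by the handful of statements it produces.

        # Sketch of a key-based sync between two copies of a table. Each side is a
        # dict mapping primary key -> row values.
        def diff_tables(master, local):
            inserts = [k for k in master if k not in local]
            deletes = [k for k in local if k not in master]
            updates = [k for k in master if k in local and master[k] != local[k]]
            return inserts, updates, deletes

        master = {1: ("Alice", 30), 2: ("Bob", 41), 4: ("Dana", 28)}   # offsite copy
        local  = {1: ("Alice", 30), 2: ("Bob", 40), 3: ("Carol", 35)}  # this database

        ins, upd, dele = diff_tables(master, local)
        print("insert:", ins)   # [4]
        print("update:", upd)   # [2]
        print("delete:", dele)  # [3]
        # Apply the three lists as INSERT / UPDATE / DELETE statements in one pass.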

    Read the article

  • serving files using django - is this a security vulnerability

    - by Tom Tom
    I'm using the following code to serve uploaded files from a login-secured view in a Django app. Do you think there is a security vulnerability in this code? I'm a bit concerned that the user could place arbitrary strings in the URL after upload/ and that this is directly mapped to the local filesystem. Actually, I don't think it is a vulnerability issue, since access to the filesystem is restricted to the files in the folder defined by the UPLOAD_LOCATION setting (UPLOAD_LOCATION is set to a folder on the webserver that is not publicly available).

        url(r'^upload/(?P<file_url>[/,.,\s,_,\-,\w]+)', 'aeon_infrastructure.views.serve_upload_files', name='project_detail'),

        @login_required
        def serve_upload_files(request, file_url):
            import os.path
            import mimetypes
            mimetypes.init()
            try:
                file_path = settings.UPLOAD_LOCATION + '/' + file_url
                fsock = open(file_path, "r")
                file_name = os.path.basename(file_path)
                file_size = os.path.getsize(file_path)
                print "file size is: " + str(file_size)
                mime_type_guess = mimetypes.guess_type(file_name)
                if mime_type_guess is not None:
                    response = HttpResponse(fsock, mimetype=mime_type_guess[0])
                    response['Content-Disposition'] = 'attachment; filename=' + file_name
                    #response.write(file)
            except IOError:
                response = HttpResponseNotFound()
            return response
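
    Not part of the question's code, but a common hardening step for this pattern is worth sketching: since the URL pattern above admits '.' and '/', and therefore '../' sequences, one can resolve the final path and refuse anything that escapes the upload directory before opening it. The UPLOAD_LOCATION value below is an illustrative stand-in for the Django setting used in the question.

        # Hardening sketch: resolve the requested path and make sure it still lives
        # inside the upload directory before opening it.
        import os

        UPLOAD_LOCATION = "/srv/uploads"   # illustrative value

        def resolve_upload_path(file_url):
            candidate = os.path.realpath(os.path.join(UPLOAD_LOCATION, file_url))
            base = os.path.realpath(UPLOAD_LOCATION)
            if not candidate.startswith(base + os.sep):
                raise ValueError("path escapes the upload directory")
            return candidate

        print(resolve_upload_path("reports/summary.pdf"))
        # resolve_upload_path("../../etc/passwd") -> ValueError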

    Read the article

  • Top techniques to avoid 'data scraping' from a website database

    - by Addsy
    I am setting up a site using PHP and MySQL that is essentially just a web front-end to an existing database. Understandably, my client is very keen to prevent anyone from being able to make a copy of the data in the database, yet at the same time wants everything publicly available and even a "view all" link to display every record in the db. Whilst I have put everything in place to prevent attacks such as SQL injection, there is nothing to prevent anyone from viewing all the records as HTML and running some sort of script to parse this data back into another database. Even if I were to remove the "view all" link, someone could still, in theory, use an automated process to go through each record one by one and compile them into a new database, essentially pinching all the information. Does anyone have any good tactics for preventing or even just deterring this that they could share? Thanks
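
    One common deterrent for the record-by-record harvesting described above is per-IP throttling. A toy in-memory Python sketch follows; the limits and the dictionary store are illustrative, and a real PHP/MySQL site would keep the counters in the database or a cache and pair this with other measures (no bulk "view all", a CAPTCHA once the limit is hit).

        # Toy per-client rate limiter: allow at most MAX_REQUESTS per WINDOW seconds
        # from one IP address; anything beyond that gets refused.
        import time
        from collections import defaultdict, deque

        MAX_REQUESTS = 60     # requests allowed ...
        WINDOW = 60.0         # ... per this many seconds, per IP

        _history = defaultdict(deque)

        def allow_request(ip, now=None):
            now = time.monotonic() if now is None else now
            hits = _history[ip]
            while hits and now - hits[0] > WINDOW:
                hits.popleft()                 # drop timestamps outside the window
            if len(hits) >= MAX_REQUESTS:
                return False                   # over the limit: serve an error or a CAPTCHA
            hits.append(now)
            return True

        # 62 requests in ~31 seconds from one address: the last two are refused.
        results = [allow_request("203.0.113.9", now=i * 0.5) for i in range(62)]
        print(results.count(False))   # 2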

    Read the article
