Search Results

Search found 21551 results on 863 pages for 'company questions'.


  • How can I make a custom layout / change header background color … with TeX, LaTeX, ConTeXt?

    - by harobed
    Hi, I currently generate this document dynamically (http://download.stephane-klein.info/exemple_document.png) with Python ReportLab to produce PDF documents. Now I would like to try producing the same document with TeX / LaTeX / ConTeXt. I have some questions: How can I build the layout? How can I give the header a background color? How can I define my custom title (with the blue box)? Which is the better choice for my project, LaTeX or ConTeXt? Which packages do I need to use: geometry? fancyhdr? Do you have an example or a resource to point me to? Yesterday I read a lot of documentation and still didn't find a solution or example for these questions. Thanks for your help, Stephane

    Read the article

  • Get svn revision without an appropriate svn binary installed

    - by Sridhar Ratnakumar
    For some reason, we can't update SVN on some of our build machines; the installed svn version is 1.3.x, but the Hudson slave used 1.6 to create the checkout. This means we can't run "svn info" on those checkouts: $ svnversion subversion/libsvn_wc/questions.c:110: (apr_err=155021) svn: This client is too old to work with working copy '.'; please get a newer Subversion client $ svn info subversion/libsvn_wc/questions.c:110: (apr_err=155021) svn: This client is too old to work with working copy '.'; please get a newer Subversion client $ My question: is there a way to access the revision number without having to invoke the svn binary? You know, like looking into the .svn/ directory? Assume the checkout was made with the latest svn version (1.6).
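    One frequently suggested workaround (a sketch, assuming an SVN 1.4-1.6 text-format working copy; the pre-1.4 XML layout and the 1.7+ SQLite layout are different) is to read the revision straight out of .svn/entries, whose fourth line records the revision of the checked-out directory:

        # Rough sketch: read the revision of an SVN 1.4-1.6 working copy from
        # the .svn/entries text file instead of invoking the svn client.
        import os

        def working_copy_revision(path="."):
            """Return the revision recorded in path/.svn/entries, or None."""
            entries = os.path.join(path, ".svn", "entries")
            try:
                with open(entries) as fh:
                    lines = fh.read().splitlines()
            except IOError:
                return None
            # Line 1 is the entries-format number; line 4 is the revision of
            # the directory entry itself in the text-based format.
            if lines and lines[0].strip().isdigit() and len(lines) >= 4:
                return int(lines[3])
            return None

        if __name__ == "__main__":
            print(working_copy_revision("."))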

    Read the article

  • Web Site in solution where "Rebuild Solution" compile succeeds cannot launch debugger

    - by fordareh
    I have a solution that includes a Web Site (created using the web site template not the web app project template - converting isn't an option, btw). When I rebuild all, the compile succeeds, but strangely displays 3 errors, all of which are "Could not get dependencies for project reference 'PROJNAME'". When I try to launch the debugger, I get the "There were build errors." dialogue. Two questions: If I choose the 'Yes' option in the debug error dialogue to run the last successful build, will it run on the code that my Rebuild All just compiled? How do I resolve this issue? I checked this post and am disheartened by my prospects. What is strange, though, is that I added these same projects to a separate web site solution that compiled/debugged fine, removed the test web site and re-added the target website I would like to debug, and it failed in the same manner. Is there a secret web site .proj file for .NET web sites? http://stackoverflow.com/questions/863379/could-not-get-dependencies-for-project-reference

    Read the article

  • In Javascript event handling, why "return false" or "event.preventDefault()" and "stopping the event

    - by Jian Lin
    It is said that when handling a "click" event, returning false and calling event.preventDefault() behave differently: preventDefault only prevents the default action (e.g. a page redirect on a link click, or a form submission), while return false also stops the event flow. Does that mean that, if the click event is registered several times for several actions using $('#clickme').click(function() { … }), returning false will stop the other handlers from running? I am on a Mac right now, so I can only test Firefox and Chrome, not IE (which has a different event model); on FF and Chrome all 3 handlers ran without any stopping… So what is the real difference, or is there a situation where stopping the event flow is not desirable? This is related to http://stackoverflow.com/questions/3042036/using-jquerys-animate-if-the-clicked-on-element-is-a-href-a and http://stackoverflow.com/questions/2017755/whats-the-difference-between-e-preventdefault-and-return-false

    Read the article

  • What is so bad about using SQL INNER JOIN

    - by Stephen B. Burris Jr.
    Every time a database diagram gets looked at, one area people are critical of is inner joins. They look at them hard and ask questions to see whether an inner join really needs to be there. Simple library example: a many-to-many relationship is normally defined in SQL with three tables: Book, Category, BookCategory. Here, Category is a table that contains two columns: ID and CategoryName. I have gotten questions about the Category table: is it needed? Could it be treated as a lookup table, storing the CategoryName in the BookCategory table instead of the CategoryID, to avoid having to do an additional INNER JOIN? (For this question, we are going to ignore changing or deleting any CategoryNames.) The question is: what is so bad about inner joins? At what point does doing them become a negative thing (general guidelines like number of transactions, number of records, number of joins in a statement, etc.)?
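    For reference, a minimal sketch of the normalized schema the question describes, using Python's built-in sqlite3 module (table and column names follow the example above; the sample data is made up):

        # The Book / Category / BookCategory many-to-many schema from the
        # question, plus the INNER JOIN it is worried about.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE Book         (ID INTEGER PRIMARY KEY, Title TEXT);
            CREATE TABLE Category     (ID INTEGER PRIMARY KEY, CategoryName TEXT);
            CREATE TABLE BookCategory (BookID     INTEGER REFERENCES Book(ID),
                                       CategoryID INTEGER REFERENCES Category(ID));
        """)
        conn.execute("INSERT INTO Book VALUES (1, 'SQL for Librarians')")
        conn.execute("INSERT INTO Category VALUES (10, 'Databases')")
        conn.execute("INSERT INTO BookCategory VALUES (1, 10)")

        # The "extra" join that storing CategoryID (rather than the name) requires:
        rows = conn.execute("""
            SELECT b.Title, c.CategoryName
            FROM Book b
            INNER JOIN BookCategory bc ON bc.BookID = b.ID
            INNER JOIN Category c      ON c.ID = bc.CategoryID
        """).fetchall()
        print(rows)   # [('SQL for Librarians', 'Databases')]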

    Read the article

  • What's the best way to explain parsing to a new programmer?

    - by Daisetsu
    I am a college student getting my Computer Science degree. A lot of my fellow students really haven't done much programming; they've done their class assignments, but let's be honest, those assignments don't really teach you how to program. Several other students have asked me how to parse things, and I'm never quite sure how to explain it to them. Is it best to start by just going line by line looking for substrings, or to give them the more involved lecture about proper lexical analysis, creating tokens, BNF, and all of that other stuff? They never quite understand it when I try to explain it. What's the best approach to explaining this without confusing them or discouraging them from actually trying?
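    When a concrete demonstration helps, a tiny hand-rolled tokenizer often lands better than the theory. A minimal sketch (arithmetic expressions are just an arbitrary example domain) of the "split the input into labeled tokens" step that every later parsing stage builds on:

        # A deliberately tiny tokenizer, meant only to illustrate turning raw
        # text into a stream of labeled tokens before any real parsing happens.
        import re

        TOKEN_SPEC = [
            ("NUMBER", r"\d+"),
            ("PLUS",   r"\+"),
            ("MINUS",  r"-"),
            ("TIMES",  r"\*"),
            ("LPAREN", r"\("),
            ("RPAREN", r"\)"),
            ("SKIP",   r"\s+"),   # whitespace: recognized but thrown away
        ]
        MASTER = re.compile("|".join("(?P<%s>%s)" % pair for pair in TOKEN_SPEC))

        def tokenize(text):
            """Yield (kind, value) pairs, raising on anything unrecognized."""
            pos = 0
            while pos < len(text):
                match = MASTER.match(text, pos)
                if not match:
                    raise SyntaxError("Unexpected character %r at %d" % (text[pos], pos))
                if match.lastgroup != "SKIP":
                    yield match.lastgroup, match.group()
                pos = match.end()

        print(list(tokenize("12 + (3 * 4)")))
        # [('NUMBER', '12'), ('PLUS', '+'), ('LPAREN', '('), ('NUMBER', '3'), ...]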

    Read the article

  • Python: Slicing a list into n nearly-equal-length partitions

    - by Drew
    I'm looking for a fast, clean, Pythonic way to divide a list into exactly n nearly-equal partitions. partition([1,2,3,4,5],5)->[[1],[2],[3],[4],[5]] partition([1,2,3,4,5],2)->[[1,2],[3,4,5]] (or [[1,2,3],[4,5]]) partition([1,2,3,4,5],3)->[[1,2],[3,4],[5]] (there are other ways to slice this one too). There are several answers in http://stackoverflow.com/questions/1335392/iteration-over-list-slices that come very close to what I want, except they are focused on the size of the slices, while I care about the number of them (some of them also pad with None). These are trivially converted, obviously, but I'm looking for a best practice. Similarly, people have pointed out great solutions in http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks-in-python for a very similar problem, but I'm more interested in the number of partitions than their specific size, as long as the sizes are within 1 of each other. Again, this is trivially convertible, but I'm looking for a best practice.
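    One common way to do it (a sketch, not necessarily the "best practice" the question is after) is to spread the remainder over the first len(seq) % n slices, so that no two slices differ in length by more than one:

        def partition(seq, n):
            """Split seq into exactly n contiguous slices whose lengths differ by at most 1."""
            q, r = divmod(len(seq), n)
            out, start = [], 0
            for i in range(n):
                size = q + (1 if i < r else 0)   # the first r slices get one extra item
                out.append(seq[start:start + size])
                start += size
            return out

        print(partition([1, 2, 3, 4, 5], 5))  # [[1], [2], [3], [4], [5]]
        print(partition([1, 2, 3, 4, 5], 2))  # [[1, 2, 3], [4, 5]]
        print(partition([1, 2, 3, 4, 5], 3))  # [[1, 2], [3, 4], [5]]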

    Read the article

  • Microsoft's new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year: Microsoft literally buys computers by the truckload. From what I understand, it's a typical practice amongst large software vendors. You plug a few wires in, you test it, and you instantly have mega tera tera flops (don't hold me to that number). Microsoft has been plugging away at its cloud service (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn't surprise me that I was recently sent an executive email concerning Microsoft's new technical computing initiative. I find it to be a great marketing idea with actual substance behind the real work. From the programmer/academic perspective, in college we dreamed about this type of processing power; it has decades of computer science theory behind it. A copy of the email I received follows (note that I almost deleted it, thinking it was spam due to its length). We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute, using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But they need increasingly sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And doing this requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time-consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves. 
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high- performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed-reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and trouble shoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends. 
Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to-date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

    Read the article

  • Manual testing vs. automated testing

    - by mgj
    Respected all, as many of you know, testing can broadly be classified into manual and automated testing. With regard to this, certain questions come to mind; I hope you can help. They include: What is the basic difference between the two types of testing? What challenges are involved in manual and in automated testing? What are the different skill sets required by a software tester for manual and for automated testing? What are the different job prospects and growth opportunities for software testers who do manual testing and automated testing respectively? Is manual testing underrated compared to automated testing in any way? If yes, kindly specify how. How differently are manual testers treated in comparison to automated testers in the corporate world (if they truly are differentiated at all)? I hope you can share your knowledge in answering these questions. Thank you for your time. :)
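    For readers new to the distinction: "automated testing" in its simplest form is just test code that a machine can re-run on every build, with no human stepping through the application. A minimal, illustrative example using Python's built-in unittest (the function under test is made up for the example):

        # A minimal automated test: once written, a build server can run it on
        # every change with no human involved, which is the core contrast with
        # manually exercising the application by hand.
        import unittest

        def apply_discount(price, percent):
            """Hypothetical function under test."""
            if not 0 <= percent <= 100:
                raise ValueError("percent must be between 0 and 100")
            return round(price * (100 - percent) / 100.0, 2)

        class DiscountTests(unittest.TestCase):
            def test_ten_percent_off(self):
                self.assertEqual(apply_discount(200.0, 10), 180.0)

            def test_invalid_percentage_rejected(self):
                with self.assertRaises(ValueError):
                    apply_discount(200.0, 150)

        if __name__ == "__main__":
            unittest.main()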

    Read the article

  • Running your own GAE server

    - by h2g2java
    The question http://stackoverflow.com/questions/2505265/how-difficult-is-it-to-migrate-away-from-google-app-engine triggered me to think about this issue again. I have read of someone running the Google App Engine development server, in production, on their own machine. My questions are: Are there any security issues in running the GAE development server on your own machine in production mode and exposing it to the web? If so, how do you mitigate them? Can the GAE dev server be run on Amazon? Is it possible to port my GAE apps running on Google's servers to a GAE dev server running on Amazon, without code changes and without changing any references to other gdata services such as Google Docs, YouTube, Gmail, etc.? How do I configure the GAE dev server to use my own Hadoop, or to use Amazon's Hadoop?

    Read the article

  • Thoughts on moving to Maven in an enterprise environment

    - by Josh Kerr
    I'm interested in hearing from those who either A) use Maven in an enterprise environment or B) tried to use Maven in an enterprise environment. I work for a large company that is contemplating bringing Maven into our environment. Currently we use OpenMake to build/merge, and home-grown software to deploy code to 100+ servers running various platforms (e.g. WAS and JBoss). OpenMake works fine for us; however, Maven does have some appealing features, the most important being dependency management. But is it viable in a large environment? Also, what headaches, if any, have you incurred in maintaining a Maven environment? Side note: I've read http://stackoverflow.com/questions/861382/why-does-maven-have-such-a-bad-rep, http://stackoverflow.com/questions/303853/what-are-your-impressions-of-maven, and a few other posts. It's interesting seeing the split between developers.

    Read the article

  • How to connect Java and MySQL using mysql-connector-java-5.1.12

    - by user225269
    I'm still a beginner in Java. I don't have any idea how to import the files I have downloaded into my Java class. They are in this path: E:\Users\user\Downloads\mysql-connector-java-5.1.12 I don't know what to do with the files I extracted from the archive I downloaded from the MySQL site in order to connect my Java application to a MySQL database. I'm using NetBeans 6.8 and have also installed WampServer. I've already checked out this: http://stackoverflow.com/questions/2118369/java-trouble-connecting-to-mysql and this: http://stackoverflow.com/questions/1640910/connecting-to-a-mysql-database But they don't seem to have answers on how to make use of the MySQL Java connector file from the MySQL site. Please help, thanks.

    Read the article

  • CodeGolf: Brothers

    - by John McClane
    Hi guys, I just finished participating in the 2009 ACM ICPC programming contest in the Latin American finals. These questions were for Brazil, Bolivia, Chile, etc. My team and I could only finish two questions out of the eleven (not bad, I think, for a first try). Here's one we did finish; I'm curious to see any variations on the code. The question in full (PS: these questions can also be found on the official ICPC website, available to everyone): In the land of ACM ruled a great king who became obsessed with order. The kingdom had a rectangular form, and the king divided the territory into a grid of small rectangular counties. Before dying, the king distributed the counties among his sons. The king was unaware of the rivalries between his sons: the first heir hated the second but not the rest, the second hated the third but not the rest, and so on... Finally, the last heir hated the first heir, but not the other heirs. As soon as the king died, the strange rivalry among the king's sons sparked off a generalized war in the kingdom. Attacks only took place between pairs of adjacent counties (adjacent counties are those that share one vertical or horizontal border). A county X attacked an adjacent county Y whenever X hated Y. The attacked county was always conquered. All attacks were carried out simultaneously, and a set of simultaneous attacks was called a battle. After a certain number of battles, the surviving sons made a truce and never battled again. For example, if the king had three sons, named 0, 1 and 2, the figure below (not reproduced here) shows what happens in the first battle for a given initial land distribution.
    Input: The input contains several test cases. The first line of a test case contains four integers: N, R, C and K. N is the number of heirs (2 <= N <= 100), R and C are the dimensions of the land (2 <= R, C <= 100), and K is the number of battles that are going to take place (1 <= K <= 100). Heirs are identified by sequential integers starting from zero. Each of the next R lines contains C integers (the identification number of the heir who owns that county), separated by single spaces; this lays out the initial land. The input ends with a line containing four zeroes separated by single spaces (to end the program, so to speak).
    Output: For each test case your program must print R lines with C integers each, separated by single spaces, in the same format as the input, representing the land distribution after all battles.
    Sample input:
    3 4 4 3
    0 1 2 0
    1 0 2 0
    0 1 2 0
    0 1 2 2
    Sample output:
    2 2 2 0
    2 1 0 1
    2 2 2 0
    0 2 0 0
    Another example. Sample input:
    4 2 3 4
    1 0 3
    2 1 2
    Sample output:
    1 0 3
    2 1 2
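    For reference, a rough Python sketch of the simulation described above (an editor's sketch, not the team's submitted code). The key observation is that the only heir who hates owner o is heir (o - 1) mod N, so a county changes hands exactly when one of its four neighbours belongs to that heir:

        import sys

        def battle(grid, n):
            """Run one simultaneous round of attacks and return the new grid."""
            rows, cols = len(grid), len(grid[0])
            new = [row[:] for row in grid]
            for r in range(rows):
                for c in range(cols):
                    attacker = (grid[r][c] - 1) % n   # the one heir who hates this owner
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == attacker:
                            new[r][c] = attacker      # the county is conquered
                            break
            return new

        def main():
            data = sys.stdin.read().split()
            pos = 0
            while True:
                n, r, c, k = (int(x) for x in data[pos:pos + 4])
                pos += 4
                if n == r == c == k == 0:
                    break
                grid = []
                for _ in range(r):
                    grid.append([int(x) for x in data[pos:pos + c]])
                    pos += c
                for _ in range(k):
                    grid = battle(grid, n)
                print("\n".join(" ".join(str(x) for x in row) for row in grid))

        if __name__ == "__main__":
            main()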

    Read the article

  • In a multithreaded app, would a multi-core or multiprocessor arrangement be better?

    - by Michael
    I've read a lot on this topic already both here (e.g., stackoverflow.com/questions/1713554/threads-processes-vs-multithreading-multi-core-multiprocessor-how-they-are or http://stackoverflow.com/questions/680684/multi-cpu-multi-core-and-hyper-thread) and elsewhere (e.g., ixbtlabs.com/articles2/cpu/rmmt-l2-cache.html or software.intel.com/en-us/articles/multi-core-introduction/), but I still am not sure about a couple things that seem very straightforward. So I thought I'd just ask. (1) Is a multi-core processor in which each core has dedicated cache effectively the same as a multiprocessor system (balanced of course for processor speed, cache size, and so on)? (2) Let's say I have some images to analyze (i.e., computer vision), and I have these images loaded into RAM. My app spawns a thread for each image that needs to be analyzed. Will this app on a shared cache multi-core processor run slower than on a dedicated cache multi-core processor, and would the latter run at the same speed as on an equivalent single-core multiprocessor machine? Thank you for the help!

    Read the article

  • Domain driven design value object, how to ensure a unique value

    - by Darren
    Hi, I am building a questionnaire creator. A questionnaire consists of sections, sections consist of pages, and pages consist of questions. Questionnaire is the aggregate root. Sections, pages and questions can have what are called shortcodes, which should be unique within a questionnaire (but not unique within the database, hence they are not strictly an identity). I intended to make the shortcode a value object and wanted to include the business rule that it should be unique within the questionnaire, but I am unsure how to ensure that. My understanding is that the value object should not access the repository or service layer, so how does it find out whether it is unique? Thanks for any help. Darren
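    One common answer (not the only one) is to let the aggregate root own the invariant, since the Questionnaire is the only object that can see every shortcode it contains; the value object stays ignorant of uniqueness. A rough Python sketch of that idea, with hypothetical names:

        class ShortCode:
            """Value object: only wraps and validates the code itself."""
            def __init__(self, value):
                if not value or " " in value:
                    raise ValueError("A shortcode must be a non-empty token")
                self.value = value
            def __eq__(self, other):
                return isinstance(other, ShortCode) and self.value == other.value
            def __hash__(self):
                return hash(self.value)

        class Questionnaire:
            """Aggregate root: the one place that sees every shortcode it contains."""
            def __init__(self):
                self._shortcodes = set()
                self.sections = []
            def add_section(self, title, shortcode):
                # The uniqueness rule lives here, not in the value object.
                if shortcode in self._shortcodes:
                    raise ValueError("Shortcode %r already used in this questionnaire" % shortcode.value)
                self._shortcodes.add(shortcode)
                self.sections.append((title, shortcode))

        q = Questionnaire()
        q.add_section("Intro", ShortCode("intro"))
        q.add_section("Details", ShortCode("intro"))   # raises ValueError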

    Read the article

  • What happened to Perl?

    - by llasa
    I will try to keep this as objective as possible. I've been dealing with PHP for 3 years now; I have always known of Perl but never really "dived" into it. So I took a look at some Perl code examples and I thought: wow, it's like PHP just failed at cloning it. My questions are: What is bad about Perl? What are the disadvantages that made it so unpopular that it is actually dying right now? Why could PHP take over? What does PHP have (or what did it have in the days of PHP4) that made it rise in popularity compared to Perl? I'm rather young, and the questions above are a bit subjective; I think you can only really answer them if you have experienced the rise of PHP along with the fall of Perl. Unlike my question before, I hope that this one can be more or less completely answered. There have to be definite disadvantages Perl has compared to PHP that made it fall.

    Read the article

  • Self-paced training kit practice tests

    - by Costa
    Hi, I have a book called ".NET Framework 2.0 Application Development Foundation", a self-paced training kit by Tony Northrup and Shawn Wildermuth. The book's CD contains practice tests. Can I rely on this CD to prepare for the exam, or will it be just like the book itself, a waste of money? Has anyone relied on it and succeeded? I am not talking about memorizing or cheating the exam; I am talking about studying it and practicing with it. Also, when I visit the MS website they do not state the number of questions, the type of questions, or the time allowed. Does anyone have this info? Thanks

    Read the article

  • How to get full control of umask/PAM/permissions?

    - by plua
    OUR SITUATION Several people from our company log in to a server and upload files. They all need to be able to upload and overwrite the same files. They have different usernames, but are all part of the same group. However, this is an internet server, so the "other" users should have (in general) just read-only access. So what I want to have is these standard permissions: files: 664 directories: 771 My goal is that all users do not need to worry about permissions. The server should be configured in such a way that these permissions apply to all files and directories, newly created, copied, or over-written. Only when we need some special permissions we'd manually change this. We upload files to the server by SFTP-ing in Nautilus, by mounting the server using sshfs and accessing it in Nautilus as if it were a local folder, and by SCP-ing in the command line. That basically covers our situation and what we aim to do. Now, I have read many things about the beautiful umask functionality. From what I understand umask (together with PAM) should allow me to do exactly what I want: set standard permissions for new files and directories. However, after many many hours of reading and trial-and-error, I still do not get this to work. I get many unexpected results. I really like to get a solid grasp of umask and have many question unanswered. I will post these questions below, together with my findings and an explanation of my trials that led to these questions. Given that many things appear to go wrong, I think that I am doing several things wrong. So therefore, there are many questions. NOTE: I am using Ubuntu 9.10 and therefore can not change the sshd_config to set the umask for the SFTP server. Installed SSH OpenSSH_5.1p1 Debian-6ubuntu2 < required OpenSSH 5.4p1. So here go the questions. 1. DO I NEED TO RESTART FOR PAM CHANGS TO TAKE EFFECT? Let's start with this. There were so many files involved and I was unable to figure out what does and what does not affect things, also because I did not know whether or not I have to restart the whole system for PAM changes to take effect. I did do so after not seeing the expected results, but is this really necessary? Or can I just log out from the server and log back in, and should new PAM policies be effective? Or is there some 'PAM' program to reload? 2. IS THERE ONE SINGLE FILE TO CHANGE THAT AFFECTS ALL USERS FOR ALL SESSIONS? So I ended up changing MANY files, as I read MANY different things. I ended up setting the umask in the following files: ~/.profile -> umask=0002 ~/.bashrc -> umask=0002 /etc/profile -> umask=0002 /etc/pam.d/common-session -> umask=0002 /etc/pam.d/sshd -> umask=0002 /etc/pam.d/login -> umask=0002 I want this change to apply to all users, so some sort of system-wide change would be best. Can it be achieved? 3. AFTER ALL, THIS UMASK THING, DOES IT WORK? So after changing umask to 0002 at every possible place, I run tests. 
------------SCP----------- TEST 1: scp testfile (which has 777 permissions for testing purposes) server:/home/ testfile 100% 4 0.0KB/s 00:00 Let's check permissions: user@server:/home$ ls -l total 4 -rwx--x--x 1 user uploaders 4 2011-02-05 17:59 testfile (711) ---------SSH------------ TEST 2: ssh server user@server:/home$ touch anotherfile user@server:/home$ ls -l total 4 -rw-rw-r-- 1 user uploaders 0 2011-02-05 18:03 anotherfile (664) --------SFTP----------- Nautilus: sftp://server/home/ Copy and paste newfile from client to server (777 on client) TEST 3: user@server:/home$ ls -l total 4 -rwxrwxrwx 1 user uploaders 3 2011-02-05 18:05 newfile (777) Create a new file through Nautilus. Check file permissions in terminal: TEST 4: user@server:/home$ ls -l total 4 -rw------- 1 user uploaders 0 2011-02-05 18:06 newfile (600) I mean... WHAT just happened here?! We should get 644 every single time. Instead I get 711, 777, 600, and then once 644. And the 644 is only achieved when creating a new, blank file through SSH, which is the least probable scenario. So I am asking, does umask/pam work after all? 4. SO WHAT DOES IT MEAN TO UMASK SSHFS? Sometimes we mount a server locally, using sshfs. Very useful. But again, we have permissions issues. Here is how we mount: sshfs -o idmap=user -o umask=0113 user@server:/home/ /mnt NOTE: we use umask = 113 because apparently, sshfs starts from 777 instead of 666, so with 113 we get 664 which is the desired file permission. But what now happens is that we see all files and directories as if they are 664. We browse in Nautilus to /mnt and: Right click - New File (newfile) --- TEST 5 Right click - New Folder (newfolder) --- TEST 6 Copy and paste a 777 file from our local client --- TEST 7 So let's check on the command line: user@client:/mnt$ ls -l total 8 -rw-rw-r-- 1 user 1007 3 Feb 5 18:05 copyfile (664) -rw-rw-r-- 1 user 1007 0 Feb 5 18:15 newfile (664) drw-rw-r-- 1 user 1007 4096 Feb 5 18:15 newfolder (664) But hey, let's check this same folder on the server-side: user@server:/home$ ls -l total 8 -rwxrwxrwx 1 user uploaders 3 2011-02-05 18:05 copyfile (777) -rw------- 1 user uploaders 0 2011-02-05 18:15 newfile (600) drwx--x--x 2 user uploaders 4096 2011-02-05 18:15 newfolder (711) What?! The REAL file permissions are very different from what we see in Nautilus. So does this umask on sshfs just create a 'filter' that shows unreal file permissions? And I tried to open a file from another user but the same group that had real 600 permissions but 644 'fake' permissions, and I could still not read this, so what good is this filter?? 5. UMASK IS ALL ABOUT FILES. BUT WHAT ABOUT DIRECTORIES? From my tests I can see that the umask that is being applied also somehow influences the directory permissions. However, I want my files to be 664 (002) and my directories to be 771 (006). So is it possible to have a different umask for directories? 6. PERHAPS UMASK/PAM IS REALLY COOL, BUT UBUNTU IS JUST BUGGY? On the one hand, I have read topics of people that have had success with PAM/UMASK and Ubuntu. On the other hand, I have found many older and newer bugs regarding umask/PAM/fuse on Ubuntu: https://bugs.launchpad.net/ubuntu/+source/gdm/+bug/241198 https://bugs.launchpad.net/ubuntu/+source/fuse/+bug/239792 https://bugs.launchpad.net/ubuntu/+source/pam/+bug/253096 https://bugs.launchpad.net/ubuntu/+source/sudo/+bug/549172 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=314796 So I do not know what to believe anymore. Should I just give up? Would ACL solve all my problems? 
    Or am I once again running into problems with Ubuntu? One word of caution with backups using tar: Red Hat/CentOS distributions support ACLs in the tar program, but Ubuntu does not support ACLs when backing up. This means that all ACLs will be lost when you create a backup. I am very willing to upgrade to Ubuntu 10.04 if that would solve my problems too, but first I want to understand what is happening.
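    As an aside, the umask arithmetic itself is easy to check in isolation with Python's os.umask. The snippet below only shows what a process-level umask of 0002 does to newly created files and directories (664 and 775); it says nothing about which PAM/sshd/sshfs layer is overriding it, and it also illustrates question 5: a single umask cannot give 664 files and 771 directories at the same time.

        # Quick check of what a 0002 umask does to new files and directories,
        # independent of PAM, sshd or sshfs. Run on any Linux box.
        import os, stat, tempfile

        old = os.umask(0o002)                # files: 666 & ~002 = 664, dirs: 777 & ~002 = 775
        try:
            workdir = tempfile.mkdtemp()
            fpath = os.path.join(workdir, "testfile")
            dpath = os.path.join(workdir, "testdir")
            with open(fpath, "w"):
                pass                         # files are created with mode 666 minus the umask
            os.mkdir(dpath)                  # directories are created with mode 777 minus the umask
            print("file:", oct(stat.S_IMODE(os.stat(fpath).st_mode)))   # -> 0o664
            print("dir: ", oct(stat.S_IMODE(os.stat(dpath).st_mode)))   # -> 0o775
        finally:
            os.umask(old)                    # restore the previous umask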

    Read the article

  • JSON - msg.d is undefined error

    - by Narmatha Balasundaram
    Hi - my WebMethod returns an array of objects, something like this: {"d": [[{"Amount":100,"Name":"StackOverflow"}, {"Amount":200,"Name":"Badges"}, {"Amount":300,"Name":"Questions"}]]} On the client side, when the JSON is referenced using msg.d, I get a "msg.d is undefined" error. I am using jQuery JavaScript Library v1.4.2. How do I access the elements in the array of objects? Adding more findings, code and questions: I don't see __type in the JSON object that is returned. Does that mean the object sent from the server is not JSON formatted? When __type is not part of the response, will I be unable to use msg.d (msg.d is undefined)? Some more: I can access the elements from the client side using msg[0][0].Amount - how can I specifically JSON-format my return object (on the server)?

    Read the article

  • Fuzzy Regex, Text Processing, Lexical Analysis?

    - by justinzane
    I'm not quite sure what terminology to search for, so my title is funky... Here is the workflow I've got: Semi-structured documents are scanned to file. The files are OCR'd to text. The text is parsed into Python objects. The objects are serialized (to SQL, JSON, whatever) for use. The documents are structured like this: HEADER blah blah, Page ### blah Garbage text... 1. Question Text... continued until now. A. Choice text... adsadsf. B. Another Choice... 2. Another Question... I need to extract the questions and choices. The problem is that, because the text is OCR output, there are occasional strange substitutions (like a '2' becoming a 'Z'), which makes ordinary regular expressions useless. I've tried the Levenshtein module and it helps, but it requires prior knowledge of what edit distance is to be expected. I don't know whether I'm looking to create a parser? A lexer? Something else? This has led me down all kinds of interesting but irrelevant paths. Guidance would be greatly appreciated. Oh, also, the text is generally from specific technical domains, so general spelling tools are not so helpful. Regarding the structure of the documents, there is no clear visual pattern -- like line breaks or indentation -- with the exception of the fact that "questions" usually begin a line. Crap on the document can cause characters to appear before the actual beginning of the line, which means that something along the lines of r'^[0-9]+' does not reliably work. Though the "questions" always begin with an int, a period and a space, the OCR can substitute other characters or skip characters. This is not so much a problem with Tesseract or Cuneiform as with the poor quality of the paper documents. Note: for the project in question, it was decided that having a human prep the OCR'd text was better than spending the time coding a solution. I'd still love good pointers, however.
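    One way people tend to attack this (a sketch only; the confusion map below is a guess and would need tuning against real scans) is to tolerate a little junk before the line start and to let each digit position also match the characters OCR commonly substitutes for it, then normalize afterwards:

        # Rough sketch: spot "question start" lines despite OCR noise by
        # widening the digit class and allowing a few stray marks up front.
        import re

        OCR_DIGITS = r"[0-9OoIl|ZSB]"          # characters OCR tends to swap for digits
        QUESTION_RE = re.compile(
            r"^\W{0,3}"                        # stray marks before the real line start
            r"(?P<num>%s{1,3})"                # the question number (possibly mangled)
            r"[.,:]\s+"                        # "1. ", "1, ", "1: " after OCR
            r"(?P<text>\S.*)" % OCR_DIGITS)

        CONFUSION = {"O": "0", "o": "0", "I": "1", "l": "1", "|": "1",
                     "Z": "2", "S": "5", "B": "8"}

        def parse_question_line(line):
            """Return (question_number, text) if the line looks like a question start."""
            m = QUESTION_RE.match(line)
            if not m:
                return None
            digits = "".join(CONFUSION.get(ch, ch) for ch in m.group("num"))
            return int(digits), m.group("text")

        print(parse_question_line(".Z. What is the airspeed velocity of a swallow?"))
        # (2, 'What is the airspeed velocity of a swallow?')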

    Read the article
