Search Results

Search found 25400 results on 1016 pages for 'enable manual correct'.


  • compile error in Ubuntu 10

    - by yozloy
    Hey guys, I have a VPS which runs SolusVM. I'm now trying to install Ruby 1.9.2 on it. I followed this guide; after I ran these commands: apt-get update apt-get -y install build-essential zlib1g zlib1g-dev libxml2 libxml2-dev libxslt-dev I got the error below: root@makserver:/usr/local/src/ruby-1.9.2-p0# apt-get -f install Reading package lists... Done Building dependency tree... Done Correcting dependencies... Done The following extra packages will be installed: libc6 Suggested packages: glibc-doc The following packages will be upgraded: libc6 1 upgraded, 0 newly installed, 0 to remove and 80 not upgraded. Need to get 0B/4252kB of archives. After this operation, 4096B disk space will be freed. Do you want to continue [Y/n]? y debconf: apt-extracttemplates failed: Bad file descriptor (Reading database ... 21594 files and directories currently installed.) Preparing to replace libc6 2.11.1-0ubuntu7.2 (using .../libc6_2.11.1-0ubuntu7.8_amd64.deb) ... open2: fork failed: Cannot allocate memory at /usr/share/perl5/Debconf/ConfModule.pm line 59 dpkg: error processing /var/cache/apt/archives/libc6_2.11.1-0ubuntu7.8_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 12 Errors were encountered while processing: /var/cache/apt/archives/libc6_2.11.1-0ubuntu7.8_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Can anybody tell me how I can correct this? Thanks.

    Read the article

  • PerlRegEx vs RegularExpressionsCore Delphi Units

    - by Jan Goyvaerts
    The RegularExpressionsCore unit that is part of Delphi XE is based on the latest class-based PerlRegEx unit that I developed. Embarcadero only made a few changes to the unit. These changes are insignificant enough that code written for earlier versions of Delphi using the class-based PerlRegEx unit will work just the same with Delphi XE. The unit was renamed from PerlRegEx to RegularExpressionsCore. When migrating your code to Delphi XE, you can choose whether you want to use the new RegularExpressionsCore unit or continue using the PerlRegEx unit in your application. All you need to change is which unit you add to the uses clause in your own units. Indentation and line breaks in the code were changed to match the style used in the Delphi RTL and VCL code. This does not change the code, but makes it harder to diff the two units. Literal strings in the unit were separated into their own unit called RegularExpressionsConsts. These strings are only used for error messages that indicate bugs in your code. If your code uses TPerlRegEx correctly then the user should not see any of these strings. My code uses assertions to check for out of bounds parameters, while Embarcadero uses exceptions. Again, if you use TPerlRegEx correctly, you should never get any assertions or exceptions. The Compile method raises an exception if the regular expression is invalid in both my original TPerlRegEx component and Embarcadero’s version. If your code allows the user to provide the regular expression, you should explicitly call Compile and catch any exceptions it raises so you can tell the user there is a problem with the regular expression. Even with user-provided regular expressions, you shouldn’t get any other assertions or exceptions if your code is correct. Note that Embarcadero owns all the rights to their RegularExpressionsCore unit. Like all the other RTL and VCL units, this unit cannot be distributed by myself or anyone other than Embarcadero. I do retain the rights to my original PerlRegEx unit which I will continue to make available for those using older versions of Delphi.

    Read the article

  • How can Windows defragmentation tools cause internal fragmentation in SQL Server?

    - by Martin
    I was just reading this article where the author talks about the file system fragmentation that can be caused by growing database files. There was one bit that I didn't quite follow. What about Windows defragmentation tools? Although you can use a Windows defragmentation tool to defragment your database files, these tools simply move chunks of files around to get them contiguous. This moving of chunks of files can cause internal fragmentation that you might not be able to resolve easily. Is the author saying here that the disc defragmenter makes no attempt to put the chunks of files in the correct sequence or have I misunderstood? If he is saying that then is this a limitation of all disc defragmenter utilities - even commercial ones?

    Read the article

  • Harnessing Business Events for Predictive Decision Making - part 1 / 3

    - by Sanjeev Sharma
    Businesses have long relied on data mining to elicit patterns and forecast future demand and supply trends. Improvements in computing hardware, specifically storage and compute capacity, have significantly enhanced the ability to store and analyze mountains of data in ever shrinking time-frames. Nevertheless, the reality is that data growth is outpacing storage capacity by a factor of two and computing power is still very much bounded by Moore's Law, doubling only every 18 months. Faced with this data explosion, businesses are exploring means to develop human brain-like capabilities in their decision systems (including BI and Analytics) to make sense of the data storm, in other words business events, in real time and respond pro-actively rather than re-actively. It is more like having a little bit of the right information just a little bit beforehand than having all of the right information after the fact. To appreciate this thought better, let's first understand the workings of the human brain. Neuroscience research has revealed that the human brain is predictive in nature and that talent is nothing more than exceptional predictive ability. The cerebral cortex, the part of the human brain responsible for cognition, thought, language etc., comprises five layers. The lowest layer in the hierarchy is responsible for sensory perception, i.e. discrete, detail-oriented tasks, whereas each layer above it is increasingly focused on assembling higher-order conceptual models. Information flows both up and down the layered memory hierarchy. This allows the conceptual mental models to be refined over time through experience and repetition. Secondly, and more importantly, the top layers are able to prime the lower layers to anticipate certain events based on the existing mental models, thereby giving the brain a predictive ability. In a way the human brain develops a "memory of the future", a sort of anticipatory thinking which lets it predict based on the occurrence of events in real time. A higher order of predictive ability stems from being able to recognize the lack of certain events. For instance, it is one thing to recognize the beats in a music track and another to detect beats that were missed, which involves a higher order predictive ability. Existing decision systems analyze historical data to identify patterns and use statistical forecasting techniques to drive planning. They are similar to the human brain in that they employ business rules, very much like mental models, to chunk and classify information. However, unlike the human brain, existing decision systems are unable to evolve these rules automatically (AI is still best suited for highly specific tasks) or predict the future based on real-time business events. Make no mistake: existing decision systems remain vital to driving long-term and broader business planning. For instance, a telco will still rely on BI and Analytics software to plan promotions and optimize inventory, but tap into business-event-enabled predictive insight to identify specifically which customers are likely to churn and engage with them pro-actively. In the next post, I will depict the technology components that enable businesses to harness real-time events and drive predictive decision making.

    Read the article

  • What calls trigger a new batch?

    - by sebf
    I am finding my project is starting to show performance degradation and I need to optimize it. The answer to my previous question and this presentation from NVidia have helped greatly in understanding the performance characteristics of code using the GPU, but there are a couple of things that aren't clear that I need to know to optimize my drawing. Specifically, which calls mark the boundary between batches? I know that any state change causes a new batch, so that includes: Render State Changes, Buffer Changes, Shader Changes, Render Target Changes. Correct? What else counts as a 'state change'? Does each Draw**Primitive() call constitute a new batch? Even if I were to issue the same call twice, with no state changes, or call it once on one part of the buffer, then again on another? If I were to update a buffer, but not change the bindings, would that be a new batch? That presentation and a DX9 page suggest using all of the texture slots available, which I take to mean loading multiple objects in 'parallel' by mapping their buffers/shaders/textures to slots 1-16. But I am not sure how this works - surely to do this you would need to change the buffer binding, and that would count as a state change? (Or is it a case of you do, but it saves 16 calls so it's OK?)

    Read the article

  • How do I fix the "Setup did not find any hard disk drives installed in your computer" error during a Win XP install?

    - by CT
    I just bought a nettop. It came with WinXP Home. I first installed Win 7 on it. I wasn't that happy with the performance so I decided to go back to XP. I am using an external dvd drive and a Win XP Pro disc. I boot from the dvd drive and during the install get this error: Setup did not find any hard disk drives installed in your computer. Make sure any hard disk drives are powered on and properly connected to your computer, and that any disk-related hardware configuration is correct. This may involve running a manufacturer-supplied diagnostic or setup program. Setup cannot continue. To quit Setup, press F3. This is the nettop in question: http://www.newegg.com/Product/Product.aspx?Item=N82E16883103228

    Read the article

  • EM CLI, diving in and beyond!

    - by Maureen Byrne
    Doing more in less time… Isn't that what we all strive to do? With this in mind, I put together two screen watches on the Oracle Enterprise Manager 12c command line interface, or EM CLI as it is also known. There is a wealth of information on any topic that you choose to read about, from manual pages to coding documents…might I even say blog posts? In our busy lives it is so nice to just sit back with a short video, watch, and learn enough to dive in. Doing more in less time is the essence of EM CLI. It enables you to script fundamental and complex administrative tasks in an elegant way, thanks to the Jython scripting language. Repetitive tasks can be scripted and reused again and again. Sure, a graphical user interface provides a more intuitive step-by-step approach to tasks, it provides a way of quickly becoming familiar with a product and its many features, and it is definitely the way to go when viewing performance data and historical trending…but for repetitive and complex tasks, scripting is the way to go! Let us take the everyday task of creating an administrator. Using EM CLI in interactive mode, the command could look like this: emcli>create_user(name='jan.doe', type='EXTERNAL_USER') This command creates an administrator called jan.doe who is an externally authenticated user, possibly LDAP or SSO, defined by the EXTERNAL_USER tag. The create_user procedure takes many arguments; see the documentation for more information. Now, where EM CLI really shines and shows its power is in creating multiple users. Regardless of the number, tens or thousands, the effort is the same. With the use of a standard programming construct, a loop, you can place your create_user() procedure within it. Using a loop allows you to iterate through a previously created list, creating new users until the list is complete. Using EM CLI in script mode, your Jython loop would look something like this: for user in list_of_users: create_user(name=user, expire='true', password='welcome123') This Jython code snippet iterates through a previously defined list of names, list_of_users, taking each name, user in this case, and creates an administrator, sets the password to welcome123, and forces the user to reset it when they first log in. This is only one of over four hundred procedures created to expose Oracle Enterprise Manager 12c functionality in a powerful and programmatic way. It is a few months since we released EM CLI with the scripting option. We are seeing many users adopt this fun and powerful way of using Oracle Enterprise Manager 12c. What are the first steps? Watch these screen watches, and dive in. The first screen watch steps you through where and how to download and install EM CLI and how to run your first few commands. The second screen watch steps you through a few scripts. Next time, I am going to show you the basic building blocks for writing a Jython script to perform Oracle Enterprise Manager 12c administrative tasks. Join this growing group of EM CLI users…. Dive in!
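    As a rough illustration of the script-mode idea described above, the snippet below wraps the same create_user() call from the post in a small Jython script; the input file name and the surrounding file-handling code are assumptions added for illustration, not part of the original post.

    ```python
    # Hypothetical EM CLI script-mode (Jython) sketch, built around the
    # create_user() call shown in the post; 'new_admins.txt' is an assumed
    # input file with one administrator name per line.
    f = open('new_admins.txt')
    list_of_users = [line.strip() for line in f if line.strip()]
    f.close()

    for user in list_of_users:
        # expire='true' forces the new administrator to reset the
        # welcome123 password at first login, as described above.
        create_user(name=user, expire='true', password='welcome123')
        print('Created administrator: ' + user)
    ```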

    Read the article

  • System testing - making sure the system conforms to specification. Validation?

    - by user970696
    After weeks of research I have nearly completed my thesis, yet I am unable to clear up the confusion discussed in all my previous threads here (and in many books). During system testing, we check the system's functions against the system analysis (functional system design) - but that would fit the definition of verification according to many books. Yet I follow ISO 12207, which considers all testing to be validation (making sure the work product meets the requirements for its intended use). How can I justify that unit testing or system testing is validation, even though I check the system against its specification, which fulfils the definition of verification? When testing that e.g. the "Save" button works, is it validation? This picture shows my understanding of V&V, so different from many other sources, including ISTQB etc. The essential problem I have is that a book using the same picture also states elsewhere that: test activities in the area of validation are usability, alpha and beta testing; for verification, testable system requirements are defined whose correct implementation can be tested through system tests. Isn't that the opposite of what the picture says? Most books present the following picture, where validation is just making sure that customer needs are satisfied. Mind you that according to ISO, validation activity is testing.

    Read the article

  • install libreoffice in Ubuntu 12.04 is impossible

    - by user1587239
    What is wrong with Ubuntu repositories? sudo apt-get install libreoffice Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libreoffice : Depends: libreoffice-core (= 1:3.5.4-0ubuntu1.1) but it is not going to be installed Depends: libreoffice-writer but it is not going to be installed Depends: libreoffice-calc but it is not going to be installed Depends: libreoffice-impress but it is not going to be installed Depends: libreoffice-draw but it is not going to be installed Depends: libreoffice-math but it is not going to be installed Depends: libreoffice-base but it is not going to be installed Depends: libreoffice-filter-mobiledev but it is not going to be installed Depends: libreoffice-java-common (>= 1:3.5.4~) but it is not going to be installed Recommends: libreoffice-gnome but it is not going to be installed or libreoffice-kde but it is not going to be installed E: Unable to correct problems, you have held broken packages.

    Read the article

  • Can I disable Pam Loginuid? Can I find out options used to configure kernel?

    - by dunxd
    I am getting a lot of the following types of errors in my secure log on a CentOS 5.4 server: crond[10445]: pam_loginuid(crond:session): set_loginuid failed opening loginuid sshd[10473]: pam_loginuid(sshd:session): set_loginuid failed opening loginuid I've seen discussion of this being caused by using a non-standard kernel without the correct CONFIG_AUDIT and CONFIG_AUDITSYSCALL options set. Where this is the case, the advice is to comment out some lines in the pam.d config files. I am running a Virtual Private Server where I need to use the kernel provided by the supplier. Is there a way to find out what options they used to configure the kernel? I want to verify whether the above is the cause. If this turns out not to be the cause, what are the risks of disabling pam_loginuid for crond and sshd?
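    If the provider exposes the kernel configuration, you can check the audit options yourself: kernels built with CONFIG_IKCONFIG_PROC publish the full config at /proc/config.gz, and most distributions also install a copy as /boot/config-<kernel release>. The sketch below is an illustration, not a guaranteed method: neither file has to exist on a VPS kernel.

    ```python
    # Sketch: look for CONFIG_AUDIT / CONFIG_AUDITSYSCALL in the running
    # kernel's configuration. /proc/config.gz exists only if the kernel was
    # built with CONFIG_IKCONFIG_PROC; /boot/config-<release> is a common
    # distribution convention. Neither is guaranteed on a VPS kernel.
    import gzip
    import os
    import platform

    WANTED = ('CONFIG_AUDIT=', 'CONFIG_AUDITSYSCALL=',
              '# CONFIG_AUDIT ', '# CONFIG_AUDITSYSCALL ')

    def kernel_config_lines():
        if os.path.exists('/proc/config.gz'):
            return gzip.open('/proc/config.gz', 'rt').read().splitlines()
        boot_cfg = '/boot/config-' + platform.release()
        if os.path.exists(boot_cfg):
            return open(boot_cfg).read().splitlines()
        return []

    lines = kernel_config_lines()
    if not lines:
        print('Kernel config not exposed here; ask the VPS provider for it.')
    for line in lines:
        if line.startswith(WANTED):
            print(line)
    ```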

    Read the article

  • Can't get PDO MySQL driver to work on PHP

    - by bart
    Trying to install Vanilla 2 locally using MAMP, I got the error: "You must have the MySQL driver for PDO enabled in order for Vanilla to connect to your database". When I check phpinfo() I see: --with-pdo-mysql=shared,/Applications/MAMP/Library --with-pdo-pgsql=shared,/Applications/MAMP/Library/pg When I go and check out those paths I find the files: libpq.5.dylib libpq.dylib libpq.5.2.dylib When I check my php.ini file I see: ; Extensions extension=pdo_mysql.so In php.ini the path to the extension dir is correct (I checked it manually): extension_dir = "/Applications/MAMP/bin/php5.3/lib/php/extensions/no-debug-non-zts-20090626/" In this folder I find the file: pdo_mysql.so phpinfo() gives me two sections: PDO PDO drivers: sqlite, sqlite2 and pdo_sqlite SQLite Library: 3.6.22 So everything seems to be fine, but I can't get the PDO MySQL driver working :(

    Read the article

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem while trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition in which you add rows belongs to a different table. Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies the SQL query used to read the new rows, otherwise the original query specified for the partition will be used, and that could generate duplicated data if you don't have dynamic behavior on the SQL side. If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out of ProcessAdd in case you are using a Batch command with more than one Process command (which is the reason why you want to use a single batch: to run multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported; even if you don't get a syntax error compiling the code, you might obtain this error at execution time: The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only. If this is happening to you, the best solution I've found is manipulating the XMLA code generated by AMO, moving the Bindings nodes to the right place. A more detailed description of the issue and the code required to send a correct XMLA batch to Analysis Services is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can also be used if you have the same problem in a Multidimensional model.
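    Purely as an illustration of the mechanical step described above (this is not the author's code, and the element names and namespace are the standard Analysis Services XMLA ones rather than anything confirmed by the post), a small sketch that takes the XMLA text produced by AMO and hoists each Bindings element out of its Process command into the enclosing Batch could look like this; whatever additional identifying information an out-of-line Bindings element needs is covered in the linked article, not here.

    ```python
    # Illustrative sketch only: move every Bindings element from under a
    # Process command up to the enclosing Batch, which is where the engine
    # expects out-of-line bindings when a Batch holds several Process
    # commands. Namespace and element names follow the standard Analysis
    # Services XMLA schema; extra information an out-of-line binding may
    # require is intentionally not handled here.
    import xml.etree.ElementTree as ET

    NS = 'http://schemas.microsoft.com/analysisservices/2003/engine'
    ET.register_namespace('', NS)

    def hoist_bindings(xmla_text):
        batch = ET.fromstring(xmla_text)            # assumes the root element is Batch
        for process in batch.findall('{%s}Process' % NS):
            for bindings in list(process.findall('{%s}Bindings' % NS)):
                process.remove(bindings)
                batch.append(bindings)              # Bindings now belongs to the Batch
        return ET.tostring(batch, encoding='unicode')
    ```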

    Read the article

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of 3 different types of 'servers'. Application server: Should be able to run CentOS (or another lightweight Linux distro). Should be able to run Apache. Should be able to run PHP. Should be able to run GD (so it does rely on its CPU). Should be extremely reliable and fast. Database server: Should be able to run MySQL. Should be able to... well, do nothing else :P. Should be extremely reliable and fast. Storage server: Should be able to run some kind of file-transfer daemon (like FTP, CouchDB, etc.). Should be able to do nothing else. Should be extremely reliable and fast. So technically, by transferring all static data to 2 different servers/services, the application server can totally focus on the webpages. My questions: What services do you recommend? Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing service (like Amazon S3, CloudFiles, etc.)? How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)? What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles. Do you have to pay "including CDN" + "excluding CDN" when you decide to enable the delivery network? Or do you only have to pay "including CDN"? Should I use my own nameserver too or can I use my domain hoster's nameservers? What are the minimum software specifications of a nameserver? Can I write some software myself? Does anyone have a good protocol description? I hope you can answer my questions. Answers: I shouldn't write my own nameserver software. Instead, I should use something like BIND (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).

    Read the article

  • Adaptive Layout for ADF Faces on Tablets

    - by Shay Shmeltzer
    In the 11.1.1.6 version of Oracle ADF we started adding specific features to the ADF Faces components so they'll work better on iPad tablets. In this entry I'm going to highlight some new capabilities that we have added to the 11.1.2.3 release. (Note: if you are still on the 11.1.1.* branch, you'll need to wait for 11.1.1.7 to get the features discussed here.) The two key additions in the 11.1.2.3 version compared to the 11.1.1.6 features for iPad support are: pagination for tables and adaptive flow layout. The pagination for tables is self-explanatory: basically, since iPads don't show scroll bars, we automatically switch the table component to render with a pagination toolbar that allows you to scroll through sets of records or jump directly to a specific set. See the image below. The adaptive flow layout takes a bit more explanation. On regular desktops the UI that you usually build for ADF Faces screens is going to use a stretch layout - meaning that it stretches to fill the whole area of the browser window. If you resize the browser window, the ADF Faces page resizes with it. If your browser window is too small, scroll bars will appear to allow you to scroll to areas that are "hidden". However, on an iPad this is probably not the type of layout you want - you would rather have a flow layout that eliminates scroll bars and instead allows you to scroll down the page. Basically you want the page to be sized based on its content, rather than on the browser window size. In ADF Faces terminology this can be done with the dimensionsFrom property set to "children". And here comes the tricky part: in the past (and also today), when you create an ADF Faces page and add a stretchable component to it, the dimensionsFrom property is set to parent by default. This will be true for other layout components you'll add as well. At this point you might be wondering, "Does this mean I'll need to go to each of the layout components in my page and modify the dimensionsFrom property value to be children?" ADF Faces to the rescue... To eliminate the need for these tedious manual changes, we introduced a new web.xml parameter, "oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS". You'll basically add the following to your web.xml: <context-param> <description> This parameter controls the default value for component geometry on the page. Supported values are: legacy - component attributes use the default values as specified for the attributes in the tag documentation (default value) auto - component attributes use the correct default value given the value of their parent component. For example, with this setting, the panelStretchLayout will use "auto" as the default value for its "dimensionsFrom" attribute instead of "parent". </description> <param-name>oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS</param-name> <param-value>auto</param-value> </context-param> Once you set this parameter, you only need to set the dimensionsFrom attribute for the top-level layout component on your page, and the rest of the components will adjust accordingly. One trick that you can use, and that is used in the demo below, is to have the dimensionsFrom property depend on the type of client that accesses your application. This way you can switch between a stretch or flow layout based on the device accessing your application.
    For example, I use the following in my page: <af:panelStretchLayout topHeight="70px" startWidth="0px" endWidth="0px" dimensionsFrom="#{adfFacesContext.agent.capabilities['touchScreen'] eq 'none' ? 'parent' : 'children' }"> which results in a flow layout for iPads and a stretch layout for regular browsers. Check out the result in the demo in the full article.

    Read the article

  • Verification vs validation again: does testing belong to verification? If so, which?

    - by user970696
    I have asked before and created a lot of controversy, so I tried to collect some data and ask a similar question again. E.g. V&V where all testing is only validation: http://www.buzzle.com/editorials/4-5-2005-68117.asp According to ISO 12207, testing is done in validation: • Prepare Test Requirements, Cases and Specifications • Conduct the Tests For verification, it mentions: The code implements proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation timing and sizing budgets, and error definition, isolation, and recovery. and The software components and units of each software item have been completely and correctly integrated into the software item. Not sure how to verify without testing, but testing is not there as a technique. From IEEE: Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]. Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610] At the end of the development process? That would mean UAT. So the question is: what testing (unit, integration, system, UAT) will be considered verification or validation? I do not understand why some say dynamic verification is testing, while others say it is only validation. An example: I am testing an application. The system requirements say there are two fields with a max length of 64 characters and a Save button. The use case says: the user will fill in first and last name and save. When checking the fields and the Save button's presence, I would say it's verification. When I follow the use case, it's validation. So it's both together, done on the system as a whole.

    Read the article

  • Apache doesn't load .php files

    - by Haddex
    First, sorry for my English and for asking something that is answered all over the web. I've read a lot of posts about this problem but I still can't find the solution. I'm a web developer who recently moved to Ubuntu from Windows 7. I had a website done (it's online and working) and I set up LAMP to keep working with it. I made a test.php file with: <?php phpinfo(); ?> and put it in the /var/www/html directory; it shows all the information about PHP and I was really happy: "Ok, it's all done, tomorrow I will work hard." But I placed my whole website into /var/www/html, not in a folder; the index.php is in /var/www/html, but guess what: it doesn't load any of my .php files, the browser just keeps thinking. What I did: I restarted Apache: /etc/init.d/apache2 restart I tried again with the test.php file and it works fine. I put a .html file in /var/www/html and it works fine. I looked at /etc/apache2/sites-enabled/000-default.conf and it says: DocumentRoot /var/www/html I looked at /etc/apache2/mods-enabled/dir.conf and it says: DirectoryIndex index.html index.cgi index.pl index.php ... Edit: I think it's something related to phpMyAdmin, as if I'm not able to connect to the database. But I got nothing on the screen when trying to load the page so... I'm not sure. I can access the URL localhost/phpmyadmin and I edited the connection.php file like this: <?php # FileName="Connection_php_mysql.htm" # Type="MYSQL" # HTTP="true" $hostname_rakstadconnection = "localhost"; $database_rakstadconnection = "rakstadclandb"; $username_rakstadconnection = "root"; $password_rakstadconnection = "admin"; $rakstadconnection = mysql_connect($hostname_rakstadconnection, $username_rakstadconnection, $password_rakstadconnection) or trigger_error(mysql_error(),E_USER_ERROR); mysql_query("SET NAMES 'utf8'"); ?> The name of the database is correct, as are the user and password. http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112609_zpsc45ddb72.png http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112120_zps0b9e15f7.png Edit 2: could this be because it's a website that I brought to Linux from Windows? I used Dreamweaver. Edit 3: I changed the # to /*/, nothing. The error.log file says: [Mon Jun 09 17:08:13.627881 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Warning: require_once(/var/www/html/Connections/rakstadconnection.php): failed to open stream: Permission denied in /var/www/html/index.php on line 1 [Mon Jun 09 17:08:13.627933 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Fatal error: require_once(): Failed opening required 'Connections/rakstadconnection.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/html/index.php on line 1 I'm reading the error log but... should I add a Linux path into my index.php file? I don't think so. Thanks.

    Read the article

  • Why have minimal user/handwritten code and do everything in XAML?

    - by Mrk Mnl
    I feel the MVVM community has become overzealous, like the OO programmers in the 90's - it is a misconception that MVVM is synonymous with no code. From my closed StackOverflow question: Many times I come across posts here about someone trying to do the equivalent in XAML instead of code-behind. Their only reason being they want to keep their code-behind 'clean'. Correct me if I am wrong, but is it not the case that: XAML is compiled too - into BAML - and then at runtime has to be parsed into code anyway. XAML can potentially have more runtime bugs, as they will not be picked up by the compiler at compile time - from incorrect spellings, for example - and these bugs are also harder to debug. There already is code-behind - like it or not, InitializeComponent(); has to be run, and the .g.i.cs file it is in contains a bunch of code, though it may be hidden. Is it purely psychological? I suspect it is developers who come from a web background and like markup as opposed to code. EDIT: I don't propose code-behind instead of XAML - use both - I prefer to do my binding in XAML too - I am just against making every effort to avoid writing code-behind, especially in a WPF app - it should be a fusion of both to get the most out of it. UPDATE: It's not even Microsoft's idea; every example on MSDN shows how you can do it both ways.

    Read the article

  • Exchange 2010 Cmdlets for Sent Items in a Distribution group

    - by Jimmy Jones
    Here is the scenario I am in and what I am stuck on getting right. What I'm looking for is a command that will provide me with statistics on what users have emailed (their Sent Items) for the day. I would like to get information on what all users of a specific distribution group have emailed for the day. I have tried the following to no avail. Get-Mailbox | Get-MailboxFolderStatistics -FolderScope SentItems | Where {$_.ItemsInFolder -gt 0} | -Start "06/14/2012 9:00AM" -End "06/14/2012 5:00PM" | Sort-Object -Property ItemsInFolder -Descending | select-object Identity,ItemsInFolder | export-csv c:\test.txt Get-MessageTrackingLog -Start "06/14/2012 9:00AM" -End "06/14/2012 5:00PM" -Sender "" | measure-object - This one will only work on specified users, but I need to check the whole group. If anyone could help me out, thank you!!!

    Read the article

  • While mail forwarding with exim, how do I rewrite the To header with the true destination address?

    - by Jom
    I have mail forwarding set up with exim using a domain forwarding file. virtual_aliases_nostar: driver = redirect allow_defer allow_fail data = ${if exists{/etc/valiases/$domain}{${lookup{$local_part@$domain}lsearch{/etc/valiases/$domain}}}} file_transport = address_file group = mail pipe_transport = virtual_address_pipe retry_use_local_part domains = lsearch;/etc/localdomains unseen It is working fine. However, I would like to rewrite the "To" header. In my system filter, I would like to put something like: headers remove to headers add "To: $recipient:" I've tried: headers remove to headers add "To: $recipient:" headers remove to headers add "To: $h_env-to:" headers remove to headers add "To: $env-to:" The intent is to have the end recipient see their own email address in the To: line of their mail client. I can't seem to figure out what the correct header is for the final destination of the email so that I can put it in the To: header. I've read through the Exim docs and can't seem to find it. I've also looked at the headers in an email in a mail client and can't see it there either. Any suggestions would be appreciated.

    Read the article

  • How to associate all file types within Wine with its corresponding native application?

    - by MestreLion
    This is easily done for a single file type, as answered in How to associate a file type within Wine with a native application?, by creating a .reg for the desired file type. But this is for AVI only. I use some Wine apps (uTorrent, Soulseek, Eudora, to name a few) that can launch a wide range of files. Email attachments, for example, can be JPG, DOC, PDF, PPS... it's impossible (and not desirable) to track down all possible file types that one may receive in an email or download in a torrent. So I need the solution to be more generic and broad. I need the file association to honor whatever native app is currently configured. And I want this to be done for all file types configured in my system. I've already figured out how to make the solution generic: simply replace the launched app in the .reg with winebrowser, like this: [HKEY_CLASSES_ROOT\.pdf] @="PDFfile" "Content Type"="application/pdf" [HKEY_CLASSES_ROOT\PDFfile\Shell\Open\command] @="C:\\windows\\system32\\winebrowser.exe \"%1\"" I've tested this and it works correctly. Since winebrowser uses xdg-open as a backend, and converts my Windows path to a Unix one, the correct (Linux) app is launched. So I need a "batch" updater to Wine's registry, sort of a wine-update-associations script that I can run whenever a new app is installed. Maybe a tool that can: List all MIME types in my system that have a default, installed app associated; Extract all the needed info (glob, MIME type, etc.); Generate the .REG file in the above format. The tricky part is: I've searched a LOT to find info about how association is done in Ubuntu 10.10 onwards, and documentation is scarce and confusing, to say the least. Freedesktop.org has no complete spec, and even the Gnome docs are obsolete. So far I've gathered 4 files that contain association info, but I'm clueless on which (or why) to use, or how to use them to generate the .reg file: ~/.local/share/applications/mimeapps.list ~/.local/share/applications/mimeinfo.cache /usr/share/applications/mimeinfo.cache /etc/gnome/defaults.list Any help, script or explanation would be greatly appreciated! Thanks!
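    As a very rough starting point for such a script (heavily hedged: the mimeapps.list location and the [Default Applications] section follow the freedesktop.org convention, the extension lookup uses Python's generic mimetypes table instead of the real glob database, and the generated ProgID names are invented for illustration), something like this could emit registry entries in the same shape as the .reg above, all routed through winebrowser:

    ```python
    # Sketch: read the desktop's default-application list and emit a Wine
    # .reg file that routes every known extension for those MIME types
    # through winebrowser (which hands the file to xdg-open). The paths,
    # the mimeapps.list format and the ProgID names are assumptions; the
    # real glob database would give better extension coverage than
    # Python's mimetypes table.
    import mimetypes
    import os
    from configparser import ConfigParser

    MIMEAPPS = os.path.expanduser('~/.local/share/applications/mimeapps.list')
    WINEBROWSER = r'C:\\windows\\system32\\winebrowser.exe \"%1\"'

    parser = ConfigParser(strict=False, interpolation=None)
    parser.read(MIMEAPPS)
    section = 'Default Applications'

    entries = ['REGEDIT4', '']
    mime_types = parser.options(section) if parser.has_section(section) else []
    for mime_type in mime_types:
        for ext in mimetypes.guess_all_extensions(mime_type):
            progid = 'WineNative' + ext.replace('.', '_')
            entries += [
                '[HKEY_CLASSES_ROOT\\%s]' % ext,
                '@="%s"' % progid,
                '"Content Type"="%s"' % mime_type,
                '',
                '[HKEY_CLASSES_ROOT\\%s\\Shell\\Open\\command]' % progid,
                '@="%s"' % WINEBROWSER,
                '',
            ]

    with open('wine-associations.reg', 'w') as out:
        out.write('\n'.join(entries) + '\n')
    ```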

    Read the article

  • Browser language detection & content ranking for new language on the same site.

    - by Arnaud
    I've been reading a lot about it but it's still really hard to make up my mind. My understanding is that if your website provides a link to the other language, this should not be an issue for Google; as long as your links are clear and clean, Google will be able to make its way through them. The website was originally in French and I added the English version; I'm just worried that English speakers will leave if the site is not in the correct language. For the home page I just wanted to get the language value from the browser and redirect to /fr/ or /en/ for the first page (using PHP this will be very easy). Could you guys have a look at it and tell me what you think about it: http://tinyurl.com/bpc5bn9 I don't want to get it wrong and lose my ranking with Google. Also, the website ranks well on the French side, while the English side has been online for 2 weeks and only gets a few visits a day. Is that because all the backlinks refer to /fr/ and Google is clever enough to decide that they are 2 different websites, so the backlinks will have to point to /en/ to increase the ranking value? Or will it take a few more weeks for the website to grow? Thanks for your help.
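    The post mentions that the home-page redirect is easy in PHP; purely as a language-neutral sketch of the same decision logic (simplified: it walks the Accept-Language tags in order and ignores q-values, which real code may want to honour), it boils down to something like this:

    ```python
    # Simplified sketch of the home-page decision described above: send the
    # visitor to /fr/ when the browser prefers French, otherwise to /en/.
    # Real code should weigh q-values; this only checks the listed tags.
    def pick_language_prefix(accept_language_header, default='/en/'):
        for part in (accept_language_header or '').split(','):
            tag = part.split(';')[0].strip().lower()   # drop any ;q=... weight
            if tag.startswith('fr'):
                return '/fr/'
            if tag.startswith('en'):
                return '/en/'
        return default

    # Example: a French browser typically sends something like this header.
    print(pick_language_prefix('fr-FR,fr;q=0.9,en-US;q=0.8'))   # -> /fr/
    ```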

    Read the article

  • How to install nvidia drivers (GT 440) on Xubuntu 12.10, screen res gone to 640x480 :(

    - by shaggyjack
    Hi all, after days of endless googling I finally gave in and decided to ask for help directly. I have just installed Xubuntu and updated to 12.10 on a pretty old (12-year-old) machine, and I am now struggling to install the correct drivers for the Nvidia GT 440 card. I have managed to get "Additional Drivers", but the app does not show in the menu. I went through a few procedures which ended up with my screen going no higher than 640x480, and tried all the sudo apt-get variations with nvidia-current and current updates... I think I got the right version of the drivers (93.43.07), but they won't install from the terminal because they say I am running an X server. So I learnt how to shut down the graphical interface, but when I try to install them from there, after I type the command (sudo home/username/Downloads/NVIDIA-Linux-x86-96.43.07.pkg1.run) nothing happens and the terminal says something like "command not found". I am desperate for help; if anybody could point me in the right direction that would be greatly appreciated. There are lots of similar topics on installing nvidia drivers, but I seem to understand that current drivers are no good for my old GPU. So if anybody could show me how to install the right version, that would be excellent. Thanks in advance! Jack

    Read the article

  • Wireframing: A Day In the Life of UX Workshop at Oracle

    - by ultan o'broin
    The Oracle Applications User Experience team's Day in the Life (DITL) of User Experience (UX) event was run in Oracle's Redwood Shores HQ for Oracle Usability Advisory Board (OUAB) members. I was charged with putting together a wireframing session, together with Director of Financial Applications User Experience, Scott Robinson (@scottrobinson). Example of stunning new wireframing visuals we used on the DITL events. We put on a lively show, explaining the basics of wireframing, the concepts, what it is and isn't, considerations on wireframing tool choice, and then imparting some tips and best practices. But the real energy came when the OUAB customers and partners in the room were challenged to do some wireframing of their own. Wireframing is about bringing your business and product use cases to life in real UX visual terms, by creating a low-fidelity drawing to iterate and agree on in advance of prototyping and coding what is to be finally built and rolled out for users. All the best people wireframe. Leonardo da Vinci used "cartoons" on some great works, tracing outlines first and using red ochre or charcoal dropped through holes in the tracing parchment onto the canvas to outline the subject. (Image distributed under Wikimedia Commons license) Wireframing an application's user experience design enables you to: Obtain stakeholder buy-in. Enable faster iteration of different designs. Determine the task flow navigation paths (in Oracle Fusion Applications navigation is linked with user roles). Develop a content strategy (readability, search engine optimization (SEO) of content, and so on). Lay out the pages, widgets, groups of features, and so on. Apply usability heuristics early (no replacement for usability testing, but a great way to do some heavy lifting up front). Decide upstream which functional user experience design patterns to apply (out-of-the-box solutions that expedite productivity). Assess which Oracle Application Development Framework (ADF) or equivalent technology components can be used (again, developer productivity is enhanced downstream). We ran a lively hands-on exercise where teams wireframed a choice of application scenarios using the time-honored tools of pen and paper. Scott worked the floor like a pro, pointing out great use of features, best practices, innovations, and making sure that the whole concept of wireframing, the gestalt, transferred. "We need more buttons!" The cry of the energized. Not quite. The winning wireframe session (online shopping scenario) from the Applications UX DITL event shown. Great fun, great energy, and great teamwork were evident in the room. Naturally, there were prizes for the best wireframe. Well, actually, prizes were handed out to the other attendees too! An exciting, slightly different aspect to delivery of this session made the wireframing event one of the highlights of the day. And definitely, something we will repeat again when we get the chance. Thanks to everyone who attended, contributed, and helped organize.

    Read the article

  • Delete the pendrive contents and also trash in Mac OS X?

    - by Warrior
    I am using a MacBook Pro. I copied some data from my pen drive to my Mac and deleted the content by moving it to the trash. After that, when I look at the pen drive's info it shows more space used than it should. Only after I empty the trash do I see the correct value for the pen drive and am I able to copy data to it. Has the Mac been designed like that, or is there some other way to delete files other than using the "Move to Trash" option? Thanks.

    Read the article

  • ITT Corporation Goes Live on Oracle Sales and Marketing Cloud Service (Fusion CRM)!

    - by Richard Lefebvre
    Back in Q2 of FY12, a division of ITT invited Oracle to demo our CRM On Demand product while the group was considering Salesforce.com. Chris Porter, our Oracle Direct sales representative, learned the players and their needs and began to develop relationships. We lost that deal, but not Chris's persistence. A few months passed and Chris called on the ITT Shape Cutting Division's Director of Sales to see how things were going. Chris was told that the plan was for the division to buy more Salesforce.com. In fact, he informed Chris that he had just sent his team to Salesforce.com training. During the conversation, Chris mentioned that our new Oracle Sales Cloud Service could run with Outlook. This caused the ITT Sales Director to reconsider the plan to move forward with our competition. Oracle was invited back to demo the Oracle Sales and Marketing Cloud Service (Fusion CRM) and after it concluded, the Director stated, "That just blew your competition away." The deal closed on June 5th, 2012. Our Oracle Platinum Partner, Intelenex, began the implementation with ITT on July 30th. We are happy to report that on September 18th, the ITT Shape Cutting Division successfully went live on Oracle Sales and Marketing Cloud Service (Fusion CRM). About: ITT is a diversified leading manufacturer of highly engineered critical components and customized technology solutions for growing industrial end-markets in energy infrastructure, electronics, aerospace and transportation. Building on its heritage of innovation, ITT partners with its customers to deliver enduring solutions to the key industries that underpin our modern way of life. Founded in 1920, ITT is headquartered in White Plains, NY, with 8,500 employees in more than 30 countries and sales in more than 125 countries. The ITT Shape Cutting Division provides plasma lasers and controls with the Burny, Kaliburn, and AMC brands. Oracle Fusion Products: Oracle Sales and Marketing Cloud Service (Fusion CRM), including: • Fusion CRM Base • Fusion Sales Cloud • Fusion Mobile and Desktop Integration • Automated Forecasting Adoption Model: SaaS Partner: Intelenex Business Drivers: The ITT Shape Cutting Division wanted to: better enable its sales force with email and mobile CRM capabilities; simplify and automate its complex sales processes; centrally manage and maintain customer contact information. Why We Won: ITT was impressed with the feature-rich capabilities of Oracle Sales and Marketing Cloud Service (Fusion CRM), including sales performance management and integration. The company also liked the product's flexibility and scalability for future growth. Expected Benefits: Streamlined, accurate forecasting; increased customer manageability; improved sales performance; better visibility into customer information.

    Read the article
