Search Results

Search found 1842 results on 74 pages for 'zend optimizer'.

Page 65 of 74

  • How to write a Compiler in C for C

    - by Kerb_z
    I want to write a compiler for C. This is a project I am doing for my college, as required by my university. I am an intermediate programmer in C, with an understanding of data structures. Now, I know a compiler has the following parts: 1. Lexer 2. Parser 3. Intermediate Code Generator 4. Optimizer 5. Code Generator. I want to begin with the lexer part and move on to the parser. I am consulting the following book: Compilers: Principles, Techniques, and Tools by Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. The thing is that this book is highly theoretical and perplexing to me. I really appreciate the authors, but the point is I am not able to begin my project; it is as if I am blind about where to go. I need guidance - please help.
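
    The lexer is the most approachable place to start: it is just a loop that groups characters into tokens. A minimal sketch in C (the token set here is hypothetical and nowhere near a complete C lexer):

        #include <ctype.h>
        #include <stdio.h>

        /* Hypothetical token kinds - a real C lexer needs many more. */
        typedef enum { TOK_IDENT, TOK_NUMBER, TOK_PUNCT, TOK_EOF } TokenKind;

        typedef struct {
            TokenKind kind;
            char text[64];
        } Token;

        /* Read one token from the stream, skipping whitespace first. */
        static Token next_token(FILE *in)
        {
            Token t = { TOK_EOF, "" };
            size_t n = 0;
            int c = fgetc(in);

            while (c != EOF && isspace(c))
                c = fgetc(in);
            if (c == EOF)
                return t;

            if (isalpha(c) || c == '_') {            /* identifier or keyword */
                t.kind = TOK_IDENT;
                while (c != EOF && (isalnum(c) || c == '_')) {
                    if (n < sizeof t.text - 1) t.text[n++] = (char)c;
                    c = fgetc(in);
                }
                ungetc(c, in);
            } else if (isdigit(c)) {                 /* integer literal */
                t.kind = TOK_NUMBER;
                while (c != EOF && isdigit(c)) {
                    if (n < sizeof t.text - 1) t.text[n++] = (char)c;
                    c = fgetc(in);
                }
                ungetc(c, in);
            } else {                                 /* single-character punctuation */
                t.kind = TOK_PUNCT;
                t.text[n++] = (char)c;
            }
            t.text[n] = '\0';
            return t;
        }

        int main(void)
        {
            Token t;
            while ((t = next_token(stdin)).kind != TOK_EOF)
                printf("%d: %s\n", (int)t.kind, t.text);
            return 0;
        }

    Each later phase can then be grown incrementally: the parser consumes next_token() output, and so on.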


  • Reuse select query in a procedure in Oracle

    - by Jer
    How would I store the result of a select statement so I can reuse the results with an in clause for other queries? Here's some pseudo code: declare ids <type?>; begin ids := select id from table_with_ids; select * from table1 where id in (ids); select * from table2 where id in (ids); end; ... or will the optimizer do this for me if I simply put the sub-query in both select statements?
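
    One way to do this in PL/SQL is to bulk-collect the ids into a collection and reference it with TABLE() in the later queries. A sketch, assuming a schema-level nested table type (needed so the collection can be used inside SQL before 12c); the table names come from the question:

        CREATE TYPE id_tab IS TABLE OF NUMBER;
        /

        DECLARE
          ids id_tab;
        BEGIN
          SELECT id BULK COLLECT INTO ids FROM table_with_ids;

          FOR r IN (SELECT * FROM table1
                    WHERE id IN (SELECT column_value FROM TABLE(ids))) LOOP
            NULL;  -- process table1 rows here
          END LOOP;

          FOR r IN (SELECT * FROM table2
                    WHERE id IN (SELECT column_value FROM TABLE(ids))) LOOP
            NULL;  -- process table2 rows here
          END LOOP;
        END;
        /

    Whether the optimizer would share the subquery on its own depends on the plan it picks; materialising the ids once at least guarantees they are read a single time.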


  • Using "CASE" in Where clause to choose various column harm the performance

    - by zivgabo
    I have a query which needs to be dynamic on some of the columns, meaning I get a parameter and, according to its value, I decide which column to filter on in my WHERE clause. I've implemented this request using a "CASE" expression: (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) >= DATEADD(mi, -@TZOffsetInMins, @sTime) AND (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) < DATEADD(mi, -@TZOffsetInMins, @fTime). If @isArrivalTime = 1 then choose the ArrivalTime column, else choose the PickedupTime column. I have a clustered index on ArrivalTime and a nonclustered index on PickedupTime. I've noticed that when I'm using this query (with @isArrivalTime = 1), performance is a lot worse compared to only using ArrivalTime. Maybe the query optimizer can't use/choose the index properly this way? I compared the execution plans and noticed that when I'm using the CASE, 32% of the time goes to an index scan, but when I didn't use the CASE (just used ArrivalTime) only 3% went to that index scan. Does anyone know the reason for this?
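
    The usual explanation is that wrapping the column in CASE makes the predicate non-sargable, so neither index can be used for a seek. One common workaround is to let each branch keep its own plain predicate; a sketch using the variable and column names from the question (the table name is hypothetical):

        IF @isArrivalTime = 1
            SELECT *
            FROM   dbo.Rides   -- hypothetical table name
            WHERE  ArrivalTime >= DATEADD(mi, -@TZOffsetInMins, @sTime)
            AND    ArrivalTime <  DATEADD(mi, -@TZOffsetInMins, @fTime);
        ELSE
            SELECT *
            FROM   dbo.Rides
            WHERE  PickedupTime >= DATEADD(mi, -@TZOffsetInMins, @sTime)
            AND    PickedupTime <  DATEADD(mi, -@TZOffsetInMins, @fTime);

    With separate predicates the optimizer can match each branch to the index on that column.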


  • Oracle (Old?) Joins - A tool/script for conversion?

    - by Grasper
    I have been porting Oracle selects, and I have been running across a lot of queries like so: SELECT e.last_name, d.department_name FROM employees e, departments d WHERE e.department_id(+) = d.department_id; ...and: SELECT last_name, d.department_id FROM employees e, departments d WHERE e.department_id = d.department_id(+); Are there any guides/tutorials for converting all of the variants of the (+) syntax? What is that syntax even called (so I can scour Google)? Even better: is there a tool/script that will do this conversion for me (preferably free)? An optimizer of some sort? I have around 500 of these queries to port. When was this syntax phased out? Any info is appreciated.
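
    The (+) marker is Oracle's pre-ANSI outer-join notation; the side carrying (+) is the one allowed to have no matching row. Rewritten in ANSI join syntax, the two examples above become:

        -- e.department_id(+) = d.department_id : employees is the optional side,
        -- so every department is returned
        SELECT e.last_name, d.department_name
        FROM   departments d
        LEFT OUTER JOIN employees e
               ON e.department_id = d.department_id;

        -- e.department_id = d.department_id(+) : departments is the optional side,
        -- so every employee is returned
        SELECT e.last_name, d.department_id
        FROM   employees e
        LEFT OUTER JOIN departments d
               ON e.department_id = d.department_id;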


  • Will unused destructors be optimized out?

    - by Brendan Long
    Assuming MyClass uses the default destructor (or no destructor), and this code: char *raw = new char[N * sizeof(MyClass)]; MyClass *buffer = reinterpret_cast<MyClass*>(raw); // Construct N objects using placement new for(size_t i = 0; i < N; i++){ buffer[i].~MyClass(); } delete[] raw; Is there any optimizer that would be able to remove this loop? Also, is there any way for my code to detect whether MyClass has an empty/default destructor?
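
    Whether a given optimizer removes the loop is compiler-specific, but the second part of the question can be answered in code: a type trait reports whether the destructor is trivial, so the loop can be skipped at compile time. A sketch, assuming C++11 (Boost's has_trivial_destructor is the older equivalent):

        #include <cstddef>
        #include <type_traits>

        // Skip the destructor loop when the compiler can prove ~T() does nothing.
        template <typename T>
        void destroy_range(T *p, std::size_t n)
        {
            if (!std::is_trivially_destructible<T>::value) {
                for (std::size_t i = 0; i < n; ++i)
                    p[i].~T();      // explicit destructor call
            }
            // nothing to do for trivially destructible types
        }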


  • Delete from empty table taking forever

    - by Will
    Hello, I have an empty table that previously had a large number of rows. The table has about 10 columns and indexes on many of them, as well as indexes on multiple columns. DELETE FROM item WHERE 1=1 takes approximately 40 seconds to complete; SELECT * FROM item takes 4 seconds. The execution plan of SELECT * FROM item shows the following: SQL> select * from midas_item; no rows selected Elapsed: 00:00:04.29 Execution Plan ---------------------------------------------------------- 0 SELECT STATEMENT Optimizer=CHOOSE (Cost=19 Card=123 Bytes=7380) 1 0 TABLE ACCESS (FULL) OF 'MIDAS_ITEM' (Cost=19 Card=123 Bytes=7380) Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 5263 consistent gets 5252 physical reads 0 redo size 1030 bytes sent via SQL*Net to client 372 bytes received via SQL*Net from client 1 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 0 rows processed Any idea why these are taking so long, and how to fix it, would be greatly appreciated!
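
    The usual cause is the table's high-water mark: DELETE does not lower it, so both the delete and the full scan still visit every block the old rows once occupied (the 5252 physical reads for "no rows selected" point the same way). A sketch of the standard fixes:

        -- Deallocates the blocks and resets the high-water mark
        -- (DDL, so it cannot be rolled back):
        TRUNCATE TABLE midas_item;

        -- Alternative when TRUNCATE is not an option: move the table and
        -- rebuild its indexes, which the move leaves UNUSABLE
        ALTER TABLE midas_item MOVE;
        ALTER INDEX midas_item_pk REBUILD;   -- repeat for each index; the name is hypothetical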


  • SQL SERVER – When are Statistics Updated – What triggers Statistics to Update

    - by pinaldave
    If you are a SQL Server consultant/trainer involved with performance tuning and query optimization, I am sure you have faced the following questions many times: When are statistics updated? What is the interval of statistics updates? What is the algorithm behind updating statistics? These are puzzling questions, and more. I searched the Internet as well as many official MS documents in order to find answers. All of them provided almost the same algorithm; however, in many places I have seen a bit of variation in the algorithm as well. I have finally compiled the list of various algorithms and decided to share the most common “factor” in all of them. I would like to ask for your suggestions as to whether the following details about when statistics are updated are accurate or not; I will update this blog post with accurate information after receiving your ideas. The answer I have found here is when statistics expire, not when they are automatically updated - I need your help to answer when they are updated.
    Permanent table: If the table has no rows, statistics are updated when there is a single change in the table. If the number of rows in the table is less than 500, statistics are updated for every 500 changes in the table. If the number of rows in the table is more than 500, statistics are updated for every 500 + 20% of rows changes in the table.
    Temporary table: If the table has no rows, statistics are updated when there is a single change in the table. If the number of rows in the table is less than 6, statistics are updated for every 6 changes in the table. If the number of rows in the table is less than 500, statistics are updated for every 500 changes in the table. If the number of rows in the table is more than 500, statistics are updated for every 500 + 20% of rows changes in the table.
    Table variable: There are no statistics for table variables.
    If you want to read further about statistics, I suggest that you read the white paper Statistics Used by the Query Optimizer in Microsoft SQL Server 2008. Let me know your opinions about statistics, as well as whether there is any update to the above algorithm. Reference: Pinal Dave (http://blog.SQLAuthority.com)
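
    A quick way to observe this behaviour on your own tables is to check when each statistic was last updated and then force an update; a sketch (the table name is just an example):

        -- When was each statistic on the table last updated?
        SELECT  s.name                              AS stats_name,
                STATS_DATE(s.object_id, s.stats_id) AS last_updated
        FROM    sys.stats AS s
        WHERE   s.object_id = OBJECT_ID('dbo.MyTable');

        -- Update statistics manually instead of waiting for the auto-update threshold
        UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;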


  • Limiting DOPs &ndash; Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I start to answer them in this way. While Dan is running on a big system he is running with Database Resource Manager and he is trying to make sure the system doesn't go crazy (remember end user are never, ever crazy!) on very high DOPs. Q: How do I control statements with very high DOPs driven from user hints in queries? A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting you can set in DBRM allows you to overwrite the hint. Now let's go into some more detail here. Assume my object (and for simplicity we assume there is a single object - and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query) has PARALLEL 64 as its setting. Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM and DBRM overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints. Q: What happens if I set parallel_max_servers higher than processes (e.g. the max number of processes allowed to run on my machine)? A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter you should get a warning in the alert log stating it can not take effect. As a follow up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number. Parallel_max_processes will prevent this. And since parallel_max_servers should be lower than max processes you can never go over either...
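
    For reference, the Max DOP cap described above is a plan directive attribute in DBRM; a sketch, assuming the resource plan and consumer group already exist (the names are hypothetical):

        BEGIN
          DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
          DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
            plan                     => 'DAYTIME_PLAN',
            group_or_subplan         => 'REPORTING_GROUP',
            comment                  => 'cap reporting queries at DOP 32',
            parallel_degree_limit_p1 => 32);
          DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
          DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
        END;
        /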


  • Why You Should Attend MySQL Connect, and Register Now

    - by Bertrand Matthelié
    MySQL Connect is taking place on September 29 and 30 in San Francisco. The early bird discount enabling you to save US$ 500 is only running for a few more days, until July 13. Are you still wondering if you should sign up? Here are 10 reasons why you definitely should: Learn from other companies how they tackled similar challenges to the ones you’re facing. Find out what they learned along the way, and how you can save time, money and a lot of troubles by avoiding repeating the same mistakes and applying the best practices they’ve developed. You’ll get the chance to hear from organizations including PayPal, Verizon, Twitter, Facebook, Ticketmaster, Ning, Mozilla, CERN, Yahoo! and more! Don’t miss this unique opportunity to meet the engineers developing and supporting the MySQL products in a single location. You’ll be able to ask them all your questions, which can represent a huge time and money saver. Acquire detailed knowledge about InnoDB, the MySQL Optimizer, High Availability strategies, improving performance and scalability, enhancing security and numerous other topics. You’ll hear it straight "from the horse’s mouth" as well as from other MySQL experts in the ecosystem. Get a better understanding about Oracle’s MySQL strategy and about the MySQL roadmap, so you can better plan where to use the MySQL database and MySQL Cluster for your next web, cloud-based and other applications. Get hands-on experience about improving performance with the MySQL Performance Schema, about using MySQL Utilities, MySQL Cluster and a lot more with eight different Hands-On Labs. Express your ideas, engage into discussions and help influence the MySQL roadmap during Birds-of-a-feather sessions about replication, backup, query optimizations and other topics. Meet partners and learn about third party tools that could be useful in your architecture. Immerse yourself into the MySQL universe and hang out with MySQL experts for two days. The discussions as well as the relationships you will create can be priceless and help you execute on your next projects in a much better and faster way. Register Now to save US$500 by taking advantage of the Early bird discount running until July 13. We’ll have parallel tracks so you should consider sending a few team members to make the most of the event. Are you attending or planning to attend Oracle OpenWorld or JavaOne? You can add MySQL Connect to your registration for only US$100! Finally, it’s always a lot of fun to attend a MySQL conference. The passion and the energy are contagious…and you’ll likely get plenty of new ideas. You will find all information about the program in the MySQL Connect Content Catalog. We look forward to seeing you there! You can also read interviews with Tomas Ulin and Ronald Bradford about MySQL Connect. Sponsorship and exhibit opportunities are still available for the conference. You will find more information here.


  • June IOUG events

    - by Mandy Ho
    Independent Oracle User Group (IOUG) Regional Events:
    June 11-12, 2012 – Broomfield, CO: 2-Day Seminar, “High Performance PL/SQL & Oracle Database 11g New Features”. Steven Feuerstein, generally considered the world’s leading PL/SQL expert, will be presenting his all-new, 2-day “Higher Performance PL/SQL and Oracle 11g PL/SQL New Features” seminar on June 11 & 12 at Level 3 Communications in Broomfield, Colorado. This will be Steven’s first Denver seminar in almost 4 years. Who knows when he will offer another? http://www.rmoug.org/
    June 14, 2012 – Ottawa, Ontario: Pythian’s Gwen Shapira puts on 3 great presentations focused on NoSQL, making OLTP run fast, and Big Data. http://www.oug-ottawa.org/pls/htmldb/f?p=327:27:1317735724699447::NO
    June 21, 2012 – Calgary, Alberta: Big Data and Extreme Analytics Summit. http://coug.ab.ca/
    June 22, 2012 – Westborough, MA: “10 Things You Probably Did Not Know” with Tom Kyte. PL/SQL turns 23 years old this year; it was first introduced in 1988 with Oracle6 Database. This session looks at five technical things about PL/SQL you probably did not know: under-the-covers features that make PL/SQL quite simply the most efficient language with which to process data in the database. http://noug.com/
    June 28/29, 2012 – Plano, Texas: Jonathan Lewis Oracle Performance Seminars. The DOUG (Dallas Oracle Users Group) has invited SpeakTech to return to Dallas, and they’re bringing Jonathan Lewis! Topics are Beating the Oracle Optimizer (June 28, 2012) and Trouble Shooting & Tuning (June 29, 2012). http://www.eventbrite.com/event/3082448687


  • New Slides - and a discussion about Dictionary Statistics

    - by Mike Dietrich
    First of all, we have just uploaded a new version of the Upgrade and Migration Workshop slides with some added information, so please feel free to download them from here. The slides contain one new interesting piece of information which led to a discussion I've had in the past days with a very large customer regarding their upgrades - and internally on the mailing list targeting an EBS database upgrade from Oracle 10.2 to Oracle 11.2. Why are we creating dictionary statistics during upgrade? I believe this forced dictionary statistics creation got introduced with the desupport of the Rule Based Optimizer in Oracle 10g. The goal: as the RBO is not supported anymore, we have to make sure that the data dictionary has fresh, non-stale statistics. In Oracle 9i that would have led to strange behaviour in some databases - so in Oracle 9i this was strongly discouraged. The upgrade scripts got hardcoded to create these stats. But during tests we had the following findings: It's important to create dictionary statistics the night before the upgrade. Not two weeks before, not 60 minutes before your downtime begins, but very close to the upgrade. From Oracle 10g onwards you'd just say: $ execute DBMS_STATS.GATHER_DICTIONARY_STATS; This is important to make sure you have fresh dictionary statistics during the upgrade for performance reasons. Tests have shown that running an upgrade without valid dictionary statistics might slow down the whole upgrade by a factor of 2x-3x. It would also be a great idea post upgrade to create fresh dictionary statistics again if you suppressed the stats creation during the upgrade process. Suppress? Yes, you could set this underscore parameter in the init.ora: _optim_dict_stats_at_db_cr_upg=FALSE to suppress the forced dictionary statistics collection during an upgrade. We strongly believe that (a) people use the default statistics creation process, which will create dictionary statistics by default, and (b) create fresh stats on the dictionary before the upgrade. Therefore, once you have followed our advice, we find it safe to use the underscore parameter during the upgrade. And we've taken out that forced statistics collection during upgrade in the next release of the database. Please note: if you are using the DBUA for the upgrade, it will remove underscore parameters for the upgrade run to improve performance - which is generally a good idea. So you'll have to start the DBUA with this call: $ dbua -initParam "_optim_dict_stats_at_db_cr_upg"=FALSE -Mike


  • MySQL Connect in Only 5 Days – Some Fun Stuff!

    - by Bertrand Matthelié
    We’ve recently blogged about the various MySQL Connect sessions focused on MySQL Cluster, InnoDB, the MySQL Optimizer and MySQL Replication. But we also wanted to draw your attention to some great opportunities to network and have fun! That’s also part of what makes a good conference... MySQL Connect Reception, San Francisco Hilton - Continental Ballroom, 6:30 p.m.–8:30 p.m. A great opportunity to network with Oracle’s MySQL engineers, partners having a booth in the exhibition hall and just about everyone at MySQL Connect. Long time MySQL users will see many familiar faces, and new users will be able to build valuable relationships. A must attend reception for sure! Taylor Street Open House, 7:00 p.m.–9:00 p.m. After two intense days at MySQL Connect, you’ll get the chance to relax and continue networking at the Taylor Street Café Open House on Sunday evening. Perhaps recharging batteries for a full week at Oracle OpenWorld… The Oracle OpenWorld Music Festival: Starting on Sunday eve and running through the entire duration of Oracle OpenWorld, the first Oracle OpenWorld Music Festival features some of today’s breakthrough musicians. It’s five nights of back-to-back performances in the heart of San Francisco. Registered Oracle conference attendees get free admission, so remember your badge when you head to a show. More information here. You can check out the full MySQL Connect program here as well as in the September edition of the MySQL newsletter. Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!


  • Enterprise Manager 12c: New DSS Demos Available

    - by Javier Puerta
    Enterprise Manager Cloud Control 12c Application Replay Demo Now Available! User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available! Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade     Enterprise Manager Cloud Control 12c Application Replay Demo Now Available! We are pleased to announce the availability of the Oracle Application Replay demo that showcases some of the key capabilities of performing realistic, production scale testing of your web and packaged Oracle applications. This demo specifically focuses on capturing production web traffic from an E-Business Suite application and replaying the captured workload on a test E-Business Suite application to assess the impact of an application infrastructure change on the workload. The target audiences are application developers, quality assurance teams, IT managers and production control staff that deal in day-to-day change management activities and trouble shooting of production environments. Demo Highlights: Enterprise Manager 12c workflows for capturing application workload Seamless integration of Application Replay with Real User Experience Insight for application workload capture Enterprise Manager 12c centralized workflows for replaying captured application workloads in a test environment Demonstrates how to minimize risk when deploying a complex EBusiness Suite application infrastructure change. Rich reporting capability for performance analysis and problem detection User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available! We are pleased to announce the availability of the Oracle Real User Experience Insight demo that showcases some of the key capabilities of user experience monitoring. This demo specifically focuses on business reporting, integrated performance diagnostics, tracking of customer journey’s through RUEI’s userflow tracking capabilities and it’s Key Performance Indicators tracking and configuration. Demo Highlights: Application-centric dashboard Integration with Oracle Enterprise Manager 12c – JVMD, ADP and BTM Session diagnostics and user session replay Monitoring through “Key Performance Indicators” (KPI) --- create alerts/incidents FUSION Application centric dashboards & integrated BI Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade DSS is pleased to announce an upgrade to the Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo. While retaining the content from the initial release of the demo—Diagnostic and Tuning Packs, Test Data Management and Data Masking, and Real Application Testing—the demo now includes a new Data Masking for Real Application Testing scenario. Demo Features: Diagnostic and Tuning Packs SQL Performance Analyzer Database Replay Data Masking Masking Real Application Testing workloads Testing pending Optimizer statistics Test Data Management


  • SQL Saturday 194 - Exeter

    - by Dave Ballantyne
    Many kudos go to Jonathan and Annette Allen and the others on the team for confirming SQL Saturday 194 in Exeter on the 8th and 9th of March. The event home page is here: http://www.sqlsaturday.com/194/eventhome.aspx and I'm delighted that myself and Dave Morrison will be presenting a full-day pre-con on the 8th on favourite subjects “TSQL and Internals”. Here is the full abstract: TSQL and internals - When faced with performance issues there are many lines of attack. Tuning the engine itself can get you so far; however, for maximum effect you need to understand how the engine works and how it translates SQL statements into performable actions. This is not a simple task - it is a massive task to deal with a multi-table join, and the number of permutations can be immense. Backed up by this knowledge, we can create better performing TSQL, understand the impact that it has upon the engine, and recognize the pitfalls and gotchas that exist in SQL Server. Ultimately, there is no ‘best way’ to perform a single task, only many variations of ‘it depends’, but now we can pick the most appropriate option for the required dataload. Over the years, many myths and misconceptions have grown up around the product; some have a basis in older versions and some are just wrong. Continuing to build on the knowledge given so far, these issues will be explored, broken down, and proved or disproved. Finally we will look to the future and explore SQL Server 2012, the new functionality that it brings, and some of the common uses that we will be able to address. After completion of this day's pre-con, attendees will have a more complete knowledge of execution plans and how they relate to the physical and logical actions that SQL Server will be executing on their behalf. The attendees will also have a more rounded and fuller knowledge of TSQL and the implications of incorrectly defining a query. Dave is a fountain of knowledge on execution plans and optimizer internals and, though I may flatter myself, I'm no shrinking violet when it comes to TSQL and such matters. I hope that if you can't join us, there are other pre-cons available from other experts in their fields that may ‘float your boat’ too. The pre-con page is http://sqlsouthwest.co.uk/SQLSaturday_precon.htm Also, excitingly, this pre-con day is sponsored by Fusion-IO, which is a great boon for the day. If you want more of this, I am offering a 2-day TSQL course starting on the 19th of March. More details on this are available here.


  • php-fpm start error

    - by Sujay
    I am using php-fpm. I recently recompiled PHP to include the imap functions, but now php-fpm fails to start with the following error: Starting php_fpm Error in argument 1, char 1: no argument for option - Usage: php-cgi [-q] [-h] [-s] [-v] [-i] [-f <file>] php-cgi <file> [args...] -a Run interactively -C Do not chdir to the script's directory -c <path>|<file> Look for php.ini file in this directory -n No php.ini file will be used -d foo[=bar] Define INI entry foo with value 'bar' -e Generate extended information for debugger/profiler -f <file> Parse <file>. Implies -q -h This help -i PHP information -l Syntax check only (lint) -m Show compiled in modules -q Quiet-mode. Suppress HTTP Header output. -s Display colour syntax highlighted source. -v Version number -w Display source with stripped comments and whitespace. -z <file> Load Zend extension <file> ................................... failed What could be the problem? Is it in php-fpm.conf or php.ini?
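
    The usage text suggests the init script is driving a plain php-cgi binary, which does not understand php-fpm's options - that is what typically happens when PHP is rebuilt without the FPM SAPI. One thing to check, as a guess only (flags other than --enable-fpm are placeholders for whatever the original build used):

        ./configure --enable-fpm --with-imap --with-imap-ssl ...   # plus your other original flags
        make && make install
        php-fpm -t            # verify the config against the rebuilt binary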


  • SSL for PHP on Windows Server 2003

    - by otobrglez
    Hi all! I have Windows Server 2003 R2 with Apache 2.2.4 and PHP 5.2.6. I want to access pages over HTTPS (SSL), and I get this error (Zend Framework GData): Unable to find the socket transport "ssl" - did you forget to enable it when you configured PHP? So here is what I did: I went to php.ini and uncommented the line extension=php_openssl.dll. I also installed Win32 OpenSSL. But nothing works. What should I do?
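
    A few lines of PHP make it easy to see whether the edited php.ini is the one Apache actually loads and whether the extension registered the ssl transport (a common cause is a second php.ini, or an extension_dir that does not contain php_openssl.dll together with the libeay32/ssleay32 DLLs on the PATH). A quick check script:

        <?php
        echo 'Loaded php.ini: ' . php_ini_loaded_file() . PHP_EOL;
        var_dump(extension_loaded('openssl'));     // should be bool(true)
        print_r(stream_get_transports());          // should list "ssl" and "tls"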


  • PHP throwing XDebug errors ONLY in command line mode...

    - by Wilhelm Murdoch
    Hey, all! I've been having a few problems running PHP-based utilities from the command line ever since I enabled XDebug. Everything runs just fine when executing a script through a browser, but once I try to execute a script on the command line, it throws the following errors: h:\www\test>@php test.php PHP Warning: PHP Startup: Unable to load dynamic library 'E:\development\xampplite\php\ext\php_curl.dll' - The specified module could not be found in Unknown on line 0 PHP Warning: Xdebug MUST be loaded as a Zend extension in Unknown on line 0 h:\www\test> The script runs just fine after this, but it's something I can't seem to wrap my head around. Could it be a path issue within my php.ini config? I'm not sure if that's the case, considering it throws the same error no matter where I access the @php environment variable. Also, all paths within my php.ini are absolute. Not really sure what's going on here. Any ideas? Thanks!
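
    The second warning is the key one: Xdebug refuses to load via a plain extension= line. A sketch of the php.ini change, assuming XAMPP's thread-safe PHP 5.2 build and a hypothetical DLL name (on PHP 5.3+ the directive is just zend_extension); the php_curl.dll warning is a separate problem, usually a path or dependent-DLL issue in the CLI environment:

        ; load Xdebug as a Zend extension, not with "extension="
        ;extension=php_xdebug.dll
        zend_extension_ts = "E:\development\xampplite\php\ext\php_xdebug.dll"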


  • Error when make "make install" PHP WebDav

    - by kron
    Hi, I'm having issues install PHP WebDAV onto Fedora8 - after downloading and running make install I get the following errors: [root@ip-18-192-114-35 dav]# make install /bin/sh /tmp/dav/libtool --mode=compile gcc -I. -I/tmp/dav -DPHP_ATOM_INC -I/tmp/dav/include -I/tmp/dav/main -I/tmp/dav -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /tmp/dav/dav.c -o dav.lo gcc -I. -I/tmp/dav -DPHP_ATOM_INC -I/tmp/dav/include -I/tmp/dav/main -I/tmp/dav -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /tmp/dav/dav.c -fPIC -DPIC -o .libs/dav.o /tmp/dav/dav.c:21:23: error: ne_socket.h: No such file or directory /tmp/dav/dav.c:22:24: error: ne_session.h: No such file or directory /tmp/dav/dav.c:23:22: error: ne_utils.h: No such file or directory /tmp/dav/dav.c:24:21: error: ne_auth.h: No such file or directory /tmp/dav/dav.c:25:22: error: ne_basic.h: No such file or directory /tmp/dav/dav.c:26:20: error: ne_207.h: No such file or directory /tmp/dav/dav.c:35: error: expected specifier-qualifier-list before 'ne_session' /tmp/dav/dav.c: In function 'dav_destructor_dav_session': /tmp/dav/dav.c:152: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:153: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:155: error: 'DavSession' has no member named 'base_uri_path' /tmp/dav/dav.c:156: error: 'DavSession' has no member named 'user_name' /tmp/dav/dav.c:157: error: 'DavSession' has no member named 'user_password' /tmp/dav/dav.c:158: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c: In function 'cb_dav_auth': /tmp/dav/dav.c:194: error: 'DavSession' has no member named 'user_name' /tmp/dav/dav.c:194: error: 'NE_ABUFSIZ' undeclared (first use in this function) /tmp/dav/dav.c:194: error: (Each undeclared identifier is reported only once /tmp/dav/dav.c:194: error: for each function it appears in.) 
/tmp/dav/dav.c:195: error: 'DavSession' has no member named 'user_password' /tmp/dav/dav.c: In function 'zif_webdav_connect': /tmp/dav/dav.c:212: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:212: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:213: error: 'ne_uri' undeclared (first use in this function) /tmp/dav/dav.c:213: error: expected ';' before 'uri' /tmp/dav/dav.c:215: error: 'uri' undeclared (first use in this function) /tmp/dav/dav.c:259: error: 'DavSession' has no member named 'base_uri_path' /tmp/dav/dav.c:260: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c:262: error: 'DavSession' has no member named 'user_name' /tmp/dav/dav.c:264: error: 'DavSession' has no member named 'user_name' /tmp/dav/dav.c:267: error: 'DavSession' has no member named 'user_password' /tmp/dav/dav.c:269: error: 'DavSession' has no member named 'user_password' /tmp/dav/dav.c:271: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c: In function 'get_full_uri': /tmp/dav/dav.c:304: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c:307: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c:313: error: 'DavSession' has no member named 'base_uri_path' /tmp/dav/dav.c:313: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c:314: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c: In function 'zif_webdav_get': /tmp/dav/dav.c:329: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:329: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:330: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:330: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:348: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:354: error: 'ne_accept_2xx' undeclared (first use in this function) /tmp/dav/dav.c:359: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:359: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_put': /tmp/dav/dav.c:377: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:377: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:378: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:378: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:396: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:405: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:405: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_delete': /tmp/dav/dav.c:422: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:422: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:423: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:423: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:441: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:448: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:448: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_mkcol': /tmp/dav/dav.c:465: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:465: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:466: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:466: error: 'req' undeclared (first use in this function) 
/tmp/dav/dav.c:484: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:491: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:491: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_copy': /tmp/dav/dav.c:510: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:510: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:511: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:511: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:539: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:550: error: 'NE_DEPTH_INFINITE' undeclared (first use in this function) /tmp/dav/dav.c:550: error: 'NE_DEPTH_ZERO' undeclared (first use in this function) /tmp/dav/dav.c:554: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:554: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_move': /tmp/dav/dav.c:573: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:573: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:574: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:574: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:598: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:611: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:611: error: invalid type argument of '->' make: *** [dav.lo] Error 1 Any help would be much appreciated. Thanks!
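
    The ne_*.h headers the compile cannot find belong to the neon HTTP/WebDAV client library, so its development package needs to be installed before re-running the build; on Fedora that is (the package name may differ between releases):

        yum install neon-devel
        # then re-run ./configure && make install for the dav extension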


  • Need some help with Apache .htaccess

    - by Legend
    I am trying to setup an application that was built using the Zend framework. Let's say my subdomain is: http://subdomain.domain.com and that it points to the following: http://www.domain.com/projectdir/ The structure of the project dir is the following: application/ ... ... library/ ... ... public/ ... ... .htaccess The contents of the htaccess are: SetEnv APPLICATION_ENV production RewriteEngine On # skip existing files and folders RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] # send everything to index RewriteRule ^.*$ index.php [NC,L] While this works, the child objects on the page are being directed to the domain i.e., the image URLs (and the CSS files etc.) are broken because they are being redirected to something like: http://www.domain.com/images/image.png Can someone please tell me how to fix this?
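
    One common way to fix this is to give the subdomain its own virtual host whose DocumentRoot is the project's public/ directory, so that /images, /css and friends resolve inside public/ rather than at the parent domain; a sketch for Apache 2.2 (the paths are assumptions):

        <VirtualHost *:80>
            ServerName subdomain.domain.com
            DocumentRoot /var/www/projectdir/public

            <Directory /var/www/projectdir/public>
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Otherwise the asset URLs in the views need to be prefixed with the application's base URL instead of being written as absolute paths from the domain root.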


  • Can't make updates with LDAP from Linux box to Windows AD

    - by amburnside
    I have a webapp (built using Zend Framework - PHP) that runs on a Linux environment and needs to authenticate against Active Directory on a Windows server. So far my webapp can authenticate with LDAPS, but it cannot perform any kind of write operation (add/update/delete); it can only read. I have configured my server as follows: I have exported the CA certificate from my Windows AD server to /etc/openldap/certs; I have created a pem file based on this certificate using openssl; I have updated /etc/openldap/ldap.conf so that it knows where to look for the pem certificate: TLS_CACERT /etc/openldap/certs/xyz.internal.pem When I run my script, I get the following error: 0x35 (Server is unwilling to perform; 0000209A: SvcErr: DSID-031A1021, problem 5003 (WILL_NOT_PERFORM), data 0 ): Have I missed something in my configuration that is causing the server to reject updates to AD?
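
    AD refuses some writes (password changes in particular) on an unsecured connection, and the bind account needs write rights on the target objects, so it helps to take the framework out of the picture and surface the extended error directly. A minimal sketch with plain ext/ldap (server name, DN, credentials and attributes are placeholders):

        <?php
        $conn = ldap_connect('ldaps://dc01.xyz.internal');
        ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3);
        ldap_set_option($conn, LDAP_OPT_REFERRALS, 0);

        if (!ldap_bind($conn, 'CN=svc-webapp,OU=Service,DC=xyz,DC=internal', 'secret')) {
            die('bind failed: ' . ldap_error($conn));
        }

        $dn    = 'CN=Some User,OU=Staff,DC=xyz,DC=internal';
        $entry = array('description' => 'updated from the Linux web app');

        if (!ldap_mod_replace($conn, $dn, $entry)) {
            // ask AD for the extended error text (the 0000209A detail)
            ldap_get_option($conn, LDAP_OPT_ERROR_STRING, $extended);
            die('modify failed: ' . ldap_error($conn) . ' / ' . $extended);
        }
        echo "modify OK\n";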


  • Wildcard DNS as subdomain on localhost using Apache

    - by Sankaranand
    Hi, I am developing a web application in Zend Framework running on an Apache server (XAMPP). The site can currently be accessed at http://localhost/sitename; it lives in c:/xampp/htdocs/sitename/. I want to set up wildcard DNS so a specific user can access my web page as username.localhost - I will fetch the username as a parameter and show the customized settings for him. Can someone help me with this? Do I have to first assign a ServerName for my localhost/sitename and then think about adding subdomain wildcards? Please enlighten me.
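
    Apache's side of this is a ServerAlias wildcard; name resolution is separate, and since the Windows hosts file has no wildcards each test name has to be added there (or a local DNS proxy such as Acrylic or dnsmasq used). A sketch for httpd-vhosts.conf, with the path as an assumption:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName localhost
            ServerAlias *.localhost
            DocumentRoot "C:/xampp/htdocs/sitename"
        </VirtualHost>

    Inside the application the username is then just the first label of $_SERVER['HTTP_HOST'].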



  • Is there a free FTP client that has macros?

    - by wheresrhys
    At the moment I'm using FileZilla to deploy new versions of a site to the live server. The trouble is that there are one or two config, bootstrap etc. files which are different for the live site, and I have to be careful not to overwrite them. Also there are big areas of code that never change (e.g. I use the Zend Framework, which is always the same). I'd like to be able to record a macro to upload the same bunch of files and folders every time, excluding subdirectories and files which shouldn't be overwritten. Does any FTP client offer this?


  • Security measures for CentOS

    - by cappuccinodrinker
    I have been tightening up my web server security and wanted to know what else I can do. I am running CentOS 5 with these measures:
    - All passwords to FTP, MySQL etc. are generated from grc.com/passwords.htm and microsoft.com/protect/fraud/passwords/create.aspx (for the ones which cannot be too long).
    - Running iptables with all ports shut off except for HTTP, mail and SMTP; the important ports like FTP and SSH are blocked to all except my static office IP. There is also no response to pings.
    - Rootkit Hunter running daily.
    - The server is PCI compliant according to Comodo.
    - Not running any crappy PHP apps; we use the Zend Framework for our stuff, do have Kayako installed, and keep them up to date.
    Can't really think of anything else I can do... I could implement a brute-force measure, but I think I already have, by simply changing my SSH port to a number above 10000 and blocking it off with iptables.
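
    For reference, the iptables part of such a setup usually reduces to a handful of rules; a sketch only, with example addresses and ports (a real script also needs its own persistence and ordering decisions):

        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp -s 203.0.113.10 -m multiport --dports 22,21 -j ACCEPT   # SSH/FTP from the office IP only
        iptables -A INPUT -p tcp -m multiport --dports 80,25,110,143 -j ACCEPT           # web and mail
        iptables -A INPUT -p icmp --icmp-type echo-request -j DROP                       # no ping replies
        iptables -A INPUT -j DROP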


  • Using an AWS EC2 server to host a busy website, and I need to set up load balancing

    - by Philip Isaacs
    My company has one EC2 server running on AWS with a MySQL DB and Apache on the same instance. This one instance hosts a website built on the PHP Zend Framework. The site runs like crap when it starts to get busy with a lot of traffic, so I'm looking for some advice on how to set up something that can handle the load better. My first question is: should I move the MySQL DB onto a separate EC2 instance, or perhaps use AWS's RDS service, which looks like a nice option? I'm sort of new to some of this, but I'm guessing I'll need at least two EC2 instances to serve the website from and some sort of load balancing mechanism to distribute traffic. But maybe not, I'm not sure. Also, what are some best practices for replicating the data so that it stays in sync on both instances? Okay, I know these are a lot of questions, but I don't know where to start, so any advice will help.

