Search Results

Search found 6295 results on 252 pages for 'git push'.


  • PHP application failed to connect after the network was plugged back in

    - by tntu
    My data center appears to have had some issues with its network, so my server suffered from on-and-off connectivity for about an hour. After the connection was fully re-established, my code still kept reporting the same error over and over until I restarted the service. The code is a simple PHP script that loops forever: it checks the Apple feedback server, sleeps for a few minutes, and then starts all over again. I understand the error being generated while the network is down, but once it came back up, why did the error persist until I restarted the script? Does PHP have something that needs to be re-initialized? Messages log:

        Dec 20 08:57:22 server kernel: r8169: eth0: link down
        Dec 20 08:57:28 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:29 server kernel: r8169: eth0: link down
        Dec 20 08:57:33 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:33 server kernel: r8169: eth0: link down
        Dec 20 08:57:37 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:38 server kernel: r8169: eth0: link down
        Dec 20 08:57:44 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:44 server kernel: r8169: eth0: link down
        Dec 20 08:57:52 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:52 server kernel: r8169: eth0: link down
        Dec 20 09:10:58 server kernel: r8169 0000:06:00.0: eth0: link up

    PHP error:

        PHP Warning: stream_socket_client(): php_network_getaddresses: getaddrinfo failed: Name or service not known in /home/push/feedback.php on line 36

    Code on line 36:

        $apns = stream_socket_client('ssl://feedback.sandbox.push.apple.com:2196', $errcode, $errstr, 60, STREAM_CLIENT_CONNECT, $stream_context);
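
    One mitigation, sketched below on the assumption that the loop looks roughly like the code above: treat a failed connect as retryable, and let the long-running process replace itself after repeated DNS failures, since a long-lived PHP process can hang on to stale resolver state (the manual restart described above is effectively this reset). This assumes the pcntl extension is available (and PHP 5.4+ for PHP_BINARY).

        <?php
        // Sketch, not the original script: back off on failure and re-exec
        // this process after repeated getaddrinfo errors.
        $stream_context = stream_context_create(); // certificate options omitted
        $failures = 0;
        while (true) {
            $apns = @stream_socket_client(
                'ssl://feedback.sandbox.push.apple.com:2196',
                $errcode, $errstr, 60, STREAM_CLIENT_CONNECT, $stream_context
            );
            if ($apns === false) {
                if (++$failures >= 5) {
                    // Blunt resolver reset: start a fresh PHP process in place.
                    pcntl_exec(PHP_BINARY, array(__FILE__));
                }
                sleep(60);
                continue;
            }
            $failures = 0;
            // ... read and process feedback tuples from $apns ...
            fclose($apns);
            sleep(300);
        }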


  • Why does Google Analytics use two domains?

    - by AKeller
    I'm building a distributed widget that is comparable to Google Analytics. Users will add a <script> tag to their site that references my widget's JavaScript file. The Google Analytics tracking code looks like this: var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-XXXXXXXX-X']); _gaq.push(['_trackPageview']); (function () { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); Can anyone explain the reasoning behind separate HTTP and HTTPS hostnames? My instinct is to just secure the www address and then use the protocol-less syntax, like //www.google-analytics.com/ga.js. But I'm sure the Google Analytics architects put a lot of thought into this approach. I'd love to understand their logic before I follow/ignore their model.
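
    For comparison, a minimal protocol-relative loader along the lines the question suggests; the hostname is a placeholder, and this assumes the asset host serves both schemes on the same name:

        (function () {
            var s = document.createElement('script');
            s.async = true;
            // '//' makes the browser reuse the page's scheme (http or https),
            // so one hostname covers both cases.
            s.src = '//www.example-widget.com/widget.js';
            var first = document.getElementsByTagName('script')[0];
            first.parentNode.insertBefore(s, first);
        })();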


  • Routing Traffic With OpenVPN

    - by user224277
    A few minutes ago I configured my VPN server, and I can connect to the VPN, but all traffic still goes through my normal home network. My OpenVPN application shows: Server IP: **.185.***.*10 Client IP: 10.8.0.6 Traffic: 7.3 KB in, 5.6 KB out Connected: 10 June 2014 19:21:59. So everything is connected, but how do I set up Windows 7 so that all traffic goes through the OpenVPN network adapter? Client config:

        client
        dev tun
        proto udp
        # enter the server's hostname
        # or IP address here, and port number
        remote **.185.***.*10 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        # Use the full filepaths to your
        # certificates and keys
        ca ca.crt
        cert user1.crt
        key user1.key
        ns-cert-type server
        comp-lzo
        verb 6

    Server config:

        port 1194
        proto udp
        dev tun
        # the full paths to your server keys and certs
        ca /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/server.crt
        key /etc/openvpn/keys/server.key
        dh /etc/openvpn/keys/dh2048.pem
        cipher BF-CBC
        # Set server mode, and define a virtual pool of IP
        # addresses for clients to use. Use any subnet
        # that does not collide with your existing subnets.
        # In this example, the server can be pinged at 10.8.0.1
        server 10.8.0.0 255.255.255.0
        # Set up route(s) to subnet(s) behind
        # OpenVPN server
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        ifconfig-pool-persist /etc/openvpn/ipp.txt
        keepalive 10 120
        status openvpn-status.log
        verb 6

    and sysctl: net.ipv4.ip_forward=1. Thank you for your time and help.
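
    A sketch of the usual fix, assuming eth0 is the server's internet-facing interface: push a default route to clients from the server config, and NAT the tunnel subnet on the server:

        # Server config: make clients send all traffic through the tunnel.
        push "redirect-gateway def1 bypass-dhcp"

        # On the server, masquerade traffic leaving the VPN subnet:
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE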


  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database in to a source control system, and running a continuous integration process each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: putting some actual integration tests in to the CI process (otherwise, they don't really do much, do they!?), deploying the database changes with a managed, automated approach, and monitoring what you've just put live, to make sure you haven't broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item: monitoring changes on the live system.

    In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott W. Ambler and Pramod J. Sadalage, and Versioning Databases – Branching and Merging, by Scott Allen. In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I'll do it in SSMS for simplicity. I then add some reference data to my table. Currently this reference data just exists in the database. For proper integration testing, it needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row and click "Set Primary Key". NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the following option. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close". We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won't show screenshots for the GitHub side of things – it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from). An interesting point to note here: when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control.."), the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

Creating and Running the Test

As I mention, the test I'm going to use here is a very simple one: are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) – but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article. In SSMS select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
This needs to be copied and pasted in to the query window, to replace the default given by tSQLt:

    -- Basic email check test
    ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
    AS
    BEGIN
        SET NOCOUNT ON

        Declare @Output VarChar(max)
        Set @Output = ''

        SELECT @Output = @Output + Email + Char(13) + Char(10)
        FROM dbo.Email
        WHERE email NOT LIKE '%_@__%.__%'

        If @Output > ''
        Begin
            Set @Output = Char(13) + Char(10) + @Output
            EXEC tSQLt.Fail @Output
        End

    END;

Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you'll see the following: Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):

Checking that the Tests are Running as Integration Tests

After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

    <!-- tSQLt tests -->
    <!-- Optional -->
    <!-- To run tSQLt tests in source control for the database, enter true. -->
    <enableTsqlt>true</enableTsqlt>

Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence different address and build number): - superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:

21-Jun-2013 11:35:19 Build FAILED.
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  


  • CodePlex Daily Summary for Friday, May 28, 2010

    CodePlex Daily Summary for Friday, May 28, 2010New ProjectsBang: BangBox Office: Event Management for Community Theater Groups: Box Office is an event management web application to help theater groups manage & promote their shows. Manage performance schedules, sell tickets, ...CellsOnWeb: El espacio de las células del Programa Académico Microsoft en Argentina. CRM 4.0 Plugin Queue Item Counter: This is a crm 4.0 plugin to count queue items in each folder and display the number at the end of the name. For example, if the queue name is "Tes...Date Calculator: Date Calculator is a small desktop utility developed using Windows Forms .NET technology. This utility is analogous to the "Date calculation" modul...Enterprise Library Investigate: Enterprise Library Investigate ProjecteProject Management: Ứng dụng nền tảng web hỗ trợ quản lí và giám sát tiến độ dự án của tổ chức doanh nghiệp.Fiddler TreeView Panel Extension: Extension for Fiddler, to display the session information in a TreeView panel instead of the default ListBox, so it groups the information logicall...Git Source Control Provider: Git Source Control Provider is a Visual Studio Plug-in that integrates Git with Visual Studio.InspurProjects: Project on Inspur Co.Kryptonite: The Kryptonite project aims to improve development of websites based on the Kentico CMS. MLang .NET Wrapper: Detect the encoding of a text without BOM (Byte Order Mask) and choose the best Encoding for persistence or network transport of textMondaze: Proof of concept using Windows Azure.MultipointControls: A collection of controls that applied Windows Multipoint Mouse SDK. Windows Multipoint Mouse SDK enable app to have multiple mice interact simultan...Mundo De Bloques: "Mundo de bloques" makes it easier for analists to find the shortest way between two states in a problem using an heuristic function for Artificial...MyRPGtests: Just some tests :)OffInvoice Add-in for MS Office 2010: Project Description: The project it's based in the ability to extend funtionality in the Microsoft Office 2010 suite.OpenGraph .NET: A C# client for the Facebook Graph API. Supports desktop, web, ASP.NET MVC, and Silverlight connections and real-time updates. PLEASE NOTE: I dis...Portable Extensible Metadata (PEM) Data Annotation Generator: This project intends to help developers who uses PEM - Portable Extensible Metadata for Entity Framework generating Data Annotation information fro...Production and sale of plastic window systems: Automation company produces window design, production and sale of plastic window systems, management of sales contracts and their execution, print ...Renjian Storm (Renjian Image Viewer Uploader): Renjian Image Viewer UploaderShark Web Intelligence CMS: Shark Web Intelligence Inc. Content Management System.Shuffleboard Game for Windows Phone 7: This is a sample Shuffleboard game written in Silverlight for Windows Phone 7. It demonstrates physics, procedural animation, perspective transform...Silverlight Property Grid: Visual Studio Style PropertyGrid for Silverlight.SvnToTfs: Simple tool that migrates every Subversion revision toward Team Foundation Server 2010. It is developed in C# witn a WPF front-end.Tamias: Basic Cms Mvc Contrib Portable Area: The goal of this project is to have a easy-to-integrate basic cms for ASP.NET MVC applications based on MVC Contrib Portable Areas.TwitBy: TwitBy is a Twitter client for anyone who uses Twitter. It's easy to use and all of the major features are there. More features to come. 
H...Under Construction: A simple site that can be used as a splash for sites being upgraded or developed. UO Editor: The Owner & Organisation Editor makes it easy to view and edit the names of the registered owner and registered organization for your Windows OS. N...webform2010: this is the test projectWireless Network: ssWiX Toolset: The Windows Installer XML (WiX) is a toolset that builds Windows installation packages from XML source code. The toolset supports a command line en...Xna.Extend: A collection of easy to use Xna components for aiding a game programmer in developing thee next big thing. I plan on using the components from this...New ReleasesA Guide to Parallel Programming: Drop 4 - Guide Preface, Chapters 1 - 5, and code: This is Drop 4 with Guide Preface, Chapters 1 - 5, and References, and the accompanying code samples. This drop requires Visual Studio 2010 Beta 2 ...Ajax Toolkit for ASP.NET MVC: MAT 1.1: MAT 1.1Community Forums NNTP bridge: Community Forums NNTP Bridge V09: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release solves ...Community Forums NNTP bridge: Community Forums NNTP Bridge V10: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has add...Community Forums NNTP bridge: Community Forums NNTP Bridge V11: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has add...CSS 360 Planetary Calendar: Beta Release: =============================================================================== Beta Release Version: 0.2 Description: This is the beta release de...Date Calculator: DateCalculator v1.0: This is the first release and as far as I know this is a stable version.eComic: eComic 2010.0.0.4: Version 2010.0.0.4 Change LogFixed issues in the "Full Screen Control Panel" causing it to lack translucence Added loupe magnification control ...Expression Encoder Batch Processor: Runtime Application v0.2: New in this version: Added more error handling if files not exist. Added button/feature to quit after current encoding job. Added code to handl...Fiddler TreeView Panel Extension: FiddlerTreeViewPanel 0.7: Initial compiled version of the assembly, ready to use. Please refer to http://fiddlertreeviewpanel.codeplex.com/ for instructions and installation.Gardens Point LEX: Gardens Point LEX v1.1.4: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.1.4 corre...Gardens Point Parser Generator: Gardens Point Parser Generator v1.4.1: Version 1.4.1 differs from version 1.4.0 only in containing a corrected version of a previously undocumented feature which allows the generation of...IsWiX: IsWiX 1.0.264.0: Build 1.0.264.0 - built against Fireworks 1.0.264.0. 
Adds support for autogenerating the SourceDir prepreprocessor variable and gives user choice t...Matrix: Matrix 0.5.2: Updated licenseMesopotamia Experiment: Mesopotamia 1.2.90: Release Notes - Ugraded to Microsoft Robotics Developer Studio 2008 R3 Bug Fixes - Fix to keep any sole organisms that penetrate to the next fitne...Microsoft Crm 4.0 Filtered Lookup: Microsoft Crm 4.0 Filtered Lookup: How to use: Allow passing custom querystring values: Create a DWORD registry key named [DisableParameterFilter] under [HKEY_LOCAL_MACHINE\SOFTWAR...MSBuild Extension Pack: May 2010: The MSBuild Extension Pack May 2010 release provides a collection of over 340 MSBuild tasks. A high level summary of what the tasks currently cover...MultiPoint Vote: MultiPointVote v.1: This accepts user inputs: number of participants, poll/survey title and the list of options A text file containing the items listed line per line...Mundo De Bloques: Mundo de Bloques, Release 1: "Mundo de bloques" makes it easier for analists to find the shortest way between two states in a problem using an heuristic function for Artificial...OffInvoice Add-in for MS Office 2010: OffInvoice for Office 2010 V1.0 Installer: Add-in for MS Word 2010 or MS Excel 2010 to allow the management (issuing, visualization and reception) of electronic invoices, based in the XML fo...OpenGraph .NET: 0.9.1 Beta: This is the first public release of OpenGraph .NET.patterns & practices: Composite WPF and Silverlight: Prism v2.2 - May 2010 Release: Composite Application Guidance for WPF and Silverlight - May 2010 Release (Prism V2.2) The Composite Application Guidance for WPF and Silverlight ...Portable Extensible Metadata (PEM) Data Annotation Generator: Release 49376: First release.Production and sale of plastic window systems: Yanuary 2009: NOTEBefore loading program, make sure you have installed MySQL and created DataBase that store in Source Code (look at below) Where Is The Source?...PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT: PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT--3.2: The current version of the Programmable Software Development Environment has the capability of reading an optional text file in each source develop...Rapidshare Episode Downloader: RED 0.8.6: - Fixed Edit form to actually save the data - Added Bypass Validation to enable future episodes - Added Search parameter to Edit form - Added refr...Renjian Storm (Renjian Image Viewer Uploader): Renjian Storm 0.6: 人间风暴 v0.6 稳定版sELedit: sELedit v1.1b: + Fixed: when export and import items to text files, there was a bug with "NULL" bytes in the unicode stringShake - C# Make: Shake v0.1.21: Changes: FileTask CopyDir method modified, see documentationSharePoint Labs: SPLab7001A-ENU-Level100: SPLab7001A-ENU-Level100 This SharePoint Lab will teach how to analyze and audit WSP files. WSP files are somewhere in a no man's land between ITPro...SharePoint Rsync List: 1.0.0.3: Fix spcontext dispose bug in menu try and run jobs only on central admin server mark a single file failure if file not copied don't delete destinat...Shuffleboard Game for Windows Phone 7: Shuffleboard 1.0.0.1: Source code, solution files, and assets.Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+04: Sw. Is Hw. Lib. 3.0.0.x+04SoulHackers Demon Unite(Chinese version): WPFClient pre alpha: can unite 2, 3 or more demons. can un-unite 1 demon to 2 demon (no triple un-unite yet).Team Deploy: Team Deploy 2010 R1: This is the initial release for Team Deploy 2010 for TFS Team Build 2010. 
All features from Team Build 2.x are functional in this version. Comple...Under Construction: Under Construction: All Files required to show under construction page. The Page will pull through the Domain name that the site is being run on this allows you to use...Unit Driven: Version 0.0.5: - Tests nested by namespace parts. - Run buttons properly disabled based on currently running tests. - Timeouts for async tests enabled.UO Editor: UO Editor v1.0: Initial ReleaseVCC: Latest build, v2.1.30527.0: Automatic drop of latest buildWeb Service Software Factory Contrib: Import WSDL 2010: Generate Service Contract models from existing WSDL documents for Web Service Software Factory 2010. Usage: Install the vsix and right click on a S...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active ProjectsAStar.netpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationSqlServerExtensionsBlogEngine.NETRawrpatterns & practices: Windows Azure Security GuidanceCodeReviewCustomer Portal Accelerator for Microsoft Dynamics CRMIonics Isapi Rewrite Filter


  • Overcoming the 1024 character limit with setx

    - by Madhur Ahuja
    I am trying to set environment variables using the setx command, as follows: setx PATH "f:\common tools\git\bin;f:\common tools\python\app;f:\common tools\python\app\scripts;f:\common tools\ruby\bin;f:\masm32\bin;F:\Borland\BCC55\Bin;%PATH%" However, I get the following error if the value is more than 1024 characters long: WARNING: The data being saved is truncated to 1024 characters. SUCCESS: Specified value was saved. Some of the paths at the end are not saved in the variable, presumably due to the character limit the warning describes.
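
    One workaround, sketched here with illustrative paths: write the value through the .NET environment API (for example from PowerShell), which does not share setx's 1024-character truncation:

        # Append to the user PATH without setx's 1024-character limit.
        $addition = 'f:\common tools\git\bin;f:\common tools\python\app'
        $current  = [Environment]::GetEnvironmentVariable('Path', 'User')
        [Environment]::SetEnvironmentVariable('Path', "$current;$addition", 'User')
        # New processes pick this up; already-running shells keep their old PATH.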


  • Ruby on Rails configuration

    - by Themasterhimself
    I'm using the following guide for getting started with Rails on Ubuntu 9.10: http://guides.rails.info/getting_started.html I have installed both ruby and gem. gokul@gokul-laptop:~$ ruby -v ruby 1.8.7 (2009-06-12 patchlevel 174) [i486-linux] gokul@gokul-laptop:~$ gem -v 1.3.6 gokul@gokul-laptop:~$ For Rails, gokul@gokul-laptop:~$ sudo gem install rails doesn't seem to give any response, so I used the Synaptic package manager to install it, and it seems to have installed correctly. gokul@gokul-laptop:~$ rails Usage: /usr/bin/rails /path/to/your/app [options] Options: -r, --ruby=path Path to the Ruby binary of your choice (otherwise scripts use env, dispatchers current path). Default: /usr/bin/ruby1.8 -d, --database=name Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite2/sqlite3/frontbase/ibm_db). Default: sqlite3 -D, --with-dispatchers Add CGI/FastCGI/mod_ruby dispatches code to generated application skeleton Default: false --freeze Freeze Rails in vendor/rails from the gems generating the skeleton Default: false -m, --template=path Use an application template that lives at path (can be a filesystem path or URL). Default: (none) Rails Info: -v, --version Show the Rails version number and quit. -h, --help Show this help message and quit. General Options: -p, --pretend Run but do not make any changes. -f, --force Overwrite files that already exist. -s, --skip Skip files that already exist. -q, --quiet Suppress normal output. -t, --backtrace Debugging: show backtrace on errors. -c, --svn Modify files with subversion. (Note: svn must be in path) -g, --git Modify files with git. (Note: git must be in path) Description: The 'rails' command creates a new Rails application with a default directory structure and configuration at the path you specify. Example: rails ~/Code/Ruby/weblog This generates a skeletal Rails installation in ~/Code/Ruby/weblog. See the README in the newly created application to get going. gokul@gokul-laptop:~$ The app folder is created with all the proper folders. The problem starts with the following commands... gokul@gokul-laptop:~$ sudo gem install bundler [sudo] password for gokul: Successfully installed bundler-0.9.24 1 gem installed Installing ri documentation for bundler-0.9.24... Installing RDoc documentation for bundler-0.9.24... gokul@gokul-laptop:~$ bundle install Could not locate Gemfile gokul@gokul-laptop:~$ Coming to the database, the default sqlite3 seems to have installed correctly. gokul@gokul-laptop:~$ sqlite3 SQLite version 3.6.16 Enter ".help" for instructions Enter SQL statements terminated with a ";" sqlite The "Welcome aboard" page cannot be found at http://localhost:3000 after executing the following commands...
gokul@gokul-laptop:~/Desktop$ rails blog create create app/controllers create app/helpers create app/models create app/views/layouts create config/environments create config/initializers create config/locales create db create doc create lib create lib/tasks create log create public/images create public/javascripts create public/stylesheets create script/performance create test/fixtures create test/functional create test/integration create test/performance create test/unit create vendor create vendor/plugins create tmp/sessions create tmp/sockets create tmp/cache create tmp/pids create Rakefile create README create app/controllers/application_controller.rb create app/helpers/application_helper.rb create config/database.yml create config/routes.rb create config/locales/en.yml create db/seeds.rb create config/initializers/backtrace_silencers.rb create config/initializers/inflections.rb create config/initializers/mime_types.rb create config/initializers/new_rails_defaults.rb create config/initializers/session_store.rb create config/environment.rb create config/boot.rb create config/environments/production.rb create config/environments/development.rb create config/environments/test.rb create script/about create script/console create script/dbconsole create script/destroy create script/generate create script/runner create script/server create script/plugin create script/performance/benchmarker create script/performance/profiler create test/test_helper.rb create test/performance/browsing_test.rb create public/404.html create public/422.html create public/500.html create public/index.html create public/favicon.ico create public/robots.txt create public/images/rails.png create public/javascripts/prototype.js create public/javascripts/effects.js create public/javascripts/dragdrop.js create public/javascripts/controls.js create public/javascripts/application.js create doc/README_FOR_APP create log/server.log create log/production.log create log/development.log create log/test.log gokul@gokul-laptop:~/Desktop$ cd blog gokul@gokul-laptop:~/Desktop/blog$ rake db:create (in /home/gokul/Desktop/blog) gokul@gokul-laptop:~/Desktop/blog$ rails server create create app/controllers create app/helpers create app/models create app/views/layouts create config/environments create config/initializers create config/locales create db create doc create lib create lib/tasks create log create public/images create public/javascripts create public/stylesheets create script/performance create test/fixtures create test/functional create test/integration create test/performance create test/unit create vendor create vendor/plugins create tmp/sessions create tmp/sockets create tmp/cache create tmp/pids create Rakefile create README create app/controllers/application_controller.rb create app/helpers/application_helper.rb create config/database.yml create config/routes.rb create config/locales/en.yml create db/seeds.rb create config/initializers/backtrace_silencers.rb create config/initializers/inflections.rb create config/initializers/mime_types.rb create config/initializers/new_rails_defaults.rb create config/initializers/session_store.rb create config/environment.rb create config/boot.rb create config/environments/production.rb create config/environments/development.rb create config/environments/test.rb create script/about create script/console create script/dbconsole create script/destroy create script/generate create script/runner create script/server create script/plugin create script/performance/benchmarker create 
script/performance/profiler create test/test_helper.rb create test/performance/browsing_test.rb create public/404.html create public/422.html create public/500.html create public/index.html create public/favicon.ico create public/robots.txt create public/images/rails.png create public/javascripts/prototype.js create public/javascripts/effects.js create public/javascripts/dragdrop.js create public/javascripts/controls.js create public/javascripts/application.js create doc/README_FOR_APP create log/server.log create log/production.log create log/development.log create log/test.log gokul@gokul-laptop:~/Desktop/blog$ Hope someone can help me with this...
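
    A sketch of what the transcript itself suggests is going on: the help output above is Rails 2.x-style (the rails command generates an app), while the linked guide targets Rails 3, so "rails server" was parsed as "create a new app named server" (hence the second wall of create lines), and a Rails 2.x app has no Gemfile for bundler to find. The 2.x-era equivalents would be:

        # Rails 2.3-era workflow (Rails 3 syntax differs):
        cd ~/Desktop/blog
        rake db:create
        ruby script/server        # starts the server on http://localhost:3000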


  • Need help with setting up comet code

    - by Saif Bechan
    Does anyone know off a way or maybe think its possible to connect Node.js with Nginx http push module to maintain a persistent connection between client and browser. I am new to comet so just don't understand the publishing etc maybe someone can help me with this. What i have set up so far is the following. I downloaded the jQuery.comet plugin and set up the following basic code: Client JavaScript <script type="text/javascript"> function updateFeed(data) { $('#time').text(data); } function catchAll(data, type) { console.log(data); console.log(type); } $.comet.connect('/broadcast/sub?channel=getIt'); $.comet.bind(updateFeed, 'feed'); $.comet.bind(catchAll); $('#kill-button').click(function() { $.comet.unbind(updateFeed, 'feed'); }); </script> What I can understand from this is that the client will keep on listening to the url followed by /broadcast/sub=getIt. When there is a message it will fire updateFeed. Pretty basic and understandable IMO. Nginx http push module config default_type application/octet-stream; sendfile on; keepalive_timeout 65; push_authorized_channels_only off; server { listen 80; location /broadcast { location = /broadcast/sub { set $push_channel_id $arg_channel; push_subscriber; push_subscriber_concurrency broadcast; push_channel_group broadcast; } location = /broadcast/pub { set $push_channel_id $arg_channel; push_publisher; push_min_message_buffer_length 5; push_max_message_buffer_length 20; push_message_timeout 5s; push_channel_group broadcast; } } } Ok now this tells nginx to listen at port 80 for any calls to /broadcast/sub and it will give back any responses sent to /broadcast/pub. Pretty basic also. This part is not so hard to understand, and is well documented over the internet. Most of the time there is a ruby or a php file behind this that does the broadcasting. My idea is to have node.js broadcasting /broadcast/pub. I think this will let me have persistent streaming data from the server to the client without breaking the connection. I tried the long-polling approach with looping the request but I think this will be more efficient. Or is this not going to work. Node.js file Now to create the Node.js i'm lost. First off all I don't know how to have node.js to work in this way. The setup I used for long polling is as follows: var sys = require('sys'), http = require('http'); http.createServer(function (req, res) { res.writeHead(200, {'Content-Type': 'text/html'}); res.write(new Date()); res.close(); seTimeout('',1000); }).listen(8000); This listens to port 8000 and just writes on the response variable. For long polling my nginx.config looked something like this: server { listen 80; server_name _; location / { proxy_pass http://mydomain.com:8080$request_uri; include /etc/nginx/proxy.conf; } } This just redirected the port 80 to 8000 and this worked fine. Does anyone have an idea on how to have Node.js act in a way Comet understands it. Would be really nice and you will help me out a lot. Recources used An example where this is done with ruby instead of Node.js jQuery.comet Nginx HTTP push module homepage Faye: a Comet client and server for Node.js and Rack To use faye I have to install the comet client, but I want to use the one supplied with Nginx. Thats why I don't just use faye. The one nginx uses is much more optimzed. extra Persistant connections Going evented with Node.js
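
    A sketch of one way to wire this up, assuming the nginx push-module config above: rather than proxying port 80 to Node, have Node publish into the module's /broadcast/pub endpoint, and nginx fans each message out to the long-held /broadcast/sub clients:

        // Sketch: Node as the publisher behind the nginx HTTP push module.
        var http = require('http');

        function publish(channel, message) {
            var req = http.request({
                host: '127.0.0.1',
                port: 80,
                method: 'POST',
                path: '/broadcast/pub?channel=' + channel
            });
            req.write(message);
            req.end();
        }

        // Push the current time to the 'getIt' channel once a second;
        // the jQuery.comet client above receives it via /broadcast/sub.
        setInterval(function () {
            publish('getIt', String(new Date()));
        }, 1000);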


  • Google Maps Polygon - Not creating independent

    - by ferronrsmith
    I am basically populating a map with polygons, I am using the flex sdk. The problem I am having is that each time it adds a parish it keep adding to the previous polygons and treating it as one. HELP!! <?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute"> <maps:Map xmlns:maps="com.google.maps.*" id="map" mapevent_mapready="onMapReady(event)" width="100%" height="100%" key="ABQIAAAAGe0Fqwt-nY7G2oB81ZIicRT2yXp_ZAY8_ufC3CFXhHIE1NvwkxRcFmaI_t1gtsS5UN6mWQkH9kIw6Q"/> <mx:Script> <![CDATA[ import com.google.maps.Color; import com.google.maps.LatLng; import com.google.maps.Map; import com.google.maps.MapEvent; import com.google.maps.MapType; import com.google.maps.controls.ZoomControl; import com.google.maps.overlays.Polygon; import com.google.maps.overlays.PolygonOptions; import com.google.maps.styles.FillStyle; import com.google.maps.styles.StrokeStyle; import mx.controls.Alert; import mx.utils.ColorUtil; import mx.utils.object_proxy; private var polys:Array = new Array(); private var labels:Array = new Array(); // private var pts:Array = new Array(); // private var poly:Polygon; // private var map:Map; private function onMapReady(event:Event):void { map.setCenter(new LatLng(18.070146,-77.225647), 9, MapType.NORMAL_MAP_TYPE); map.addControl(new ZoomControl()); map.enableScrollWheelZoom(); map.enableContinuousZoom(); getXml(); } public function getXml():void { var xmlString:URLRequest = new URLRequest("parish.xml"); var xmlLoader:URLLoader = new URLLoader(xmlString); xmlLoader.addEventListener("complete", readXml); } private function readXml(event:Event):void { var markersXML:XML = new XML(event.target.data); var markers:XMLList = markersXML..parish; //var markersCount:int = markers.length(); var i:Number; var t:Number; for(i=0; i < markers.length(); i++) { var marker:XML = markers[i]; var name:String = marker.@name; var colour:String = marker.@colour; // Alert.show(""); var the_p:XMLList = markers..point; var pts:Array = []; for(t=0; t < the_p.length(); t++) { var theparish:XML = the_p[t]; pts[t] = new LatLng(theparish.@lat,theparish.@lng); // Alert.show("working" + theparish.@lat); // var pts.push(new LatLng(theparish.@lat,theparish.@lng)); } var poly:Polygon = new Polygon(pts, new PolygonOptions({ strokyStyle: new StrokeStyle({ color: colour, thickness: 1, alpha: 0.2}), fillStyle: new FillStyle({ color: colour, alpha: 0.2}) })); //polys.push(poly); //labels.push(name); Alert.show("this"); pts = [] map.addOverlay(poly); } } /* public function createMarker(latlng:LatLng, name:String, address:String, type:String): Marker { var marker:Marker = new Marker(latlng, new MarkerOptions({icon: new customIcons[type], iconOffset: new Point(-16, -32)})); var html:String = "<b>" + name + "</b> <br/>" + address; marker.addEventListener(MapMouseEvent.CLICK, function(e:MapMouseEvent):void { marker.openInfoWindow(new InfoWindowOptions({contentHTML:html})); }); return marker; } */ ]]> </mx:Script> </mx:Application>


  • Could you share your emacs dot-files for web development

    - by Gok Demir
    Hi, could you kindly share your emacs dot-files for web development that work with CSS, HTML, JavaScript, PHP and, if possible, Python/Django? I really need a complete setup. I looked at nXhtml and it's good in parts (HTML code completion works), but it's poor at indentation, and CSS code completion does not work (it says the tag table is empty in most cases). I really need something that works out of the box: code completion, git integration, pretty indentation, and multi-mode support for mixed HTML, CSS, JavaScript, and PHP code.


  • Where / how does Apache generate the HTML code used in the default directory listing?

    - by Ellen B
    I am looking to modify the HTML that apache generates for its default directory listing. I already know how to create a HEADER.html file that gets included for every directory listing. I am attempting to change the actual html that Apache generates for the file listing itself; right now my MacOS apache generates this for example: <table><tr><th><img src="/icons/blank.gif" alt="[ICO]"></th><th><a href="?C=N;O=D">Name</a></th><th><a href="?C=M;O=A">Last modified</a></th><th><a href="?C=S;O=A">Size</a></th><th><a href="?C=D;O=A">Description</a></th></tr><tr><th colspan="5"><hr></th></tr> <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="ios-prototype/">ios-prototype/</a> </td><td align="right">07-Dec-2012 16:47 </td><td align="right"> - </td><td>&nbsp;</td></tr> <tr><td valign="top"><img src="/icons/folder.gif" alt="[DIR]"></td><td><a href="magneto-git/">magneto-git/</a> </td><td align="right">07-Dec-2012 16:46 </td><td align="right"> - </td><td>&nbsp;</td></tr> <tr><th colspan="5"><hr></th></tr> </table> I want a different HTML structure (like, say, an OL) generated when my server spits back directory listings. (FYI I'm doing a bunch of mobile browser prototyping with my local webserver & need to make it not totally horrible to browse with fingers to the right test directory — the table structure sucks, and while I can mod a lot of it with CSS it's still going to be ganky.)
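
    For what it's worth, the markup in question comes from mod_autoindex and is hardcoded in mod_autoindex.c, so short of patching the module the usual levers are directives plus CSS rather than an HTML template. A sketch with illustrative paths:

        <Directory "/Library/WebServer/Documents">
            Options +Indexes
            # HTMLTable keeps the table but adds class hooks for styling;
            # dropping FancyIndexing instead falls back to a plain <ul> listing.
            IndexOptions FancyIndexing HTMLTable SuppressDescription
            IndexStyleSheet "/autoindex.css"
            HeaderName /HEADER.html
        </Directory>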


  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that explain how to set up nginx to redirect HTTP requests to HTTPS. Many are outdated, or just flat out wrong.

        server {
          listen *:80;
          server_name <%= @fqdn %>;
          #root /nowhere;
          #rewrite ^ https://$server_name$request_uri? permanent;
          #rewrite ^ https://$server_name$request_uri permanent;
          #return 301 https://$server_name$request_uri;
          #return 301 http://$server_name$request_uri;
          #return 301 http://192.168.33.10$request_uri;
          return 301 http://$host$request_uri;
        }

        server {
          listen *:443 ssl default_server;
          server_name <%= @fqdn %>;
          server_tokens off;
          root <%= @git_home %>/gitlab/public;
          ssl on;
          ssl_certificate <%= @gitlab_ssl_cert %>;
          ssl_certificate_key <%= @gitlab_ssl_key %>;
          ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
          ssl_ciphers AES:HIGH:!ADH:!MDF;
          ssl_prefer_server_ciphers on;
          location / {
            # serve static files from defined root folder;.
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }
          # if a file, which is not found in the root folder is requested,
          # then the proxy pass the request to the upsteam (gitlab puma)
          location @gitlab {
            proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_redirect off;
            etc...

    I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10, whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect HTTP requests to HTTPS?

        tailf /var/log/nginx/access.log
        192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
        192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"

        tailf /var/log/nginx/gitlab_error.log
        2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10"

    Resources: http://wiki.nginx.org/Pitfalls | How to make nginx redirect | How to force or redirect to SSL in nginx? | nginx ssl redirect | Nginx & Https Redirection | https://www.tinywp.in/301-redirect-wordpress/
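
    For reference, a minimal sketch of a port-80 block that does redirect to HTTPS; note the scheme in the return statement, which several of the commented-out attempts above leave as http. If the 'Welcome to nginx' page still appears, it is also worth checking that the distribution's default server block isn't catching the request first:

        server {
            listen 80;
            server_name gitlab.localdomain;
            return 301 https://$host$request_uri;
        }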


  • Where does $PATH get set in OS X 10.6 Snow Leopard?

    - by Andrew
    I type echo $PATH on the command line and get /opt/local/bin:/opt/local/sbin:/Users/andrew/bin:/usr/local/bin:/usr/local/mysql/bin:/usr/local/pear/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin I'm wondering where this is getting set since my .bash_login file is empty. I'm particularly concerned that, after installing MacPorts, it installed a bunch of junk in /opt. I don't think that directory even exists in a normal Mac OS X install. Update: Thanks to jtimberman for correcting my echo $PATH statement
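
    One place worth checking on Snow Leopard (a sketch, not a full account of every file involved): the baseline PATH is assembled by path_helper from /etc/paths and /etc/paths.d/* before per-user dotfiles such as ~/.bash_login or ~/.profile run, and installers like MacPorts typically append to ~/.profile as well:

        cat /etc/paths                  # system-wide entries, one per line
        ls /etc/paths.d                 # drop-in additions from installers
        /usr/libexec/path_helper -s     # prints the PATH it would export
        grep PATH ~/.profile ~/.bashrc  # per-user additions, e.g. MacPorts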


  • du excluding hard links possible?

    - by balor123
    I'm trying to determine how big a Git repository cloned from a local file system is. Such a clone creates hard links for some but not all files. How can I determine its disk usage? The best I can come up with right now is running "du -a" on the original and again on the clone and taking the difference, since each hard-linked file will be counted only once per run. Ideally, I would just run du on the clone and count each hard-linked file zero times.
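
    A sketch of one way to get exactly that, relying on du's documented behavior of counting each inode only once per invocation: list the original first, and the clone's line then shows only its unique usage:

        # Shared inodes are charged to whichever argument du meets first,
        # so the second line is the clone's hard-link-free usage.
        du -s /path/to/original /path/to/clone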


  • How can I automatically cycle a new image in an AWS Auto Scaling Group?

    - by JustinY
    I have a web application setup with a load balancer and auto scaling group to manage scaling. The source code is in a git repository so I don't have to update the images when the code changes, but occasionally the environment changes so we create a new image. Then that image needs to be cycled into the auto scaling group. Is there a way to cycle the images automatically? Right now I schedule a scale up and scale down action which gets rid of the old instances.
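
    A sketch of automating that with the AWS CLI (names and IDs here are placeholders; the same calls exist in the SDKs for scripting): point the group at a launch configuration built from the new AMI, then recycle instances one at a time:

        # Register a launch configuration for the new image and swap it in.
        aws autoscaling create-launch-configuration \
            --launch-configuration-name web-v2 --image-id ami-12345678 \
            --instance-type m1.small
        aws autoscaling update-auto-scaling-group \
            --auto-scaling-group-name web-asg --launch-configuration-name web-v2

        # Terminate old instances one by one; the group replaces each
        # from the new launch configuration.
        aws autoscaling terminate-instance-in-auto-scaling-group \
            --instance-id i-0abc1234 --no-should-decrement-desired-capacity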


  • Open a file with eclipse via terminal and focus eclipse window

    - by Rui Carneiro
    I am a web developer and my current working tools are: Terminal (ssh, tailing logs, grep, git, etc.), Eclipse (PDT, JavaScript, etc.), and Firefox (Developer Toolbar + Firebug). The problem is that I hate using the Eclipse navigation tree. For me it is a lot easier to go to the terminal and do something like this: $ eclipse /var/www/myproject/long/path/lib/Driver/Sql.php The annoying part is that the Eclipse window is not focused after this command. I have to manually click on the Eclipse window (using the mouse... :@ grrr). Is there any way to force Eclipse to take focus?
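
    One sketch of a workaround on Linux desktops, assuming the wmctrl package is installed and the window title contains "Eclipse": wrap the command so it raises the window afterwards:

        #!/bin/sh
        # ~/bin/ecl: open a file in Eclipse, then focus its window.
        eclipse "$1" &
        wmctrl -a "Eclipse"   # -a activates the first window whose title matches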


  • ngGrid reusable filter AngularJS

    - by wootscootinboogie
    I have a business requirement that I filter a boolean value in my ngGrid. The filter has three states: only true, only false and both. Filtering like this seems to be a common enough use case that I should refactor that functionality out of my code for re use (possibly in a directive/filter?). I'd like to know how I can go about pulling out the customFilter function in my controller and make it so that I can pass the filter a property name on which to filter, and a value for selectedFilterOption. The code currently works, but I feel like this is a good chance to get better at angular :). So how can I pull out my filtering used here and make it a reusable piece of functionality? app.controller('DocumentController',function($scope,DocumentService) { $scope.filterOptions = { filterText: '', useExternalFilter: false }; $scope.totalServerItems =0; $scope.pagingOptions ={ pageSizes: [5,10,100], pageSize: 5, currentPage: 1 } //filter! $scope.dropdownOptions = [{ name: 'Show all' },{ name: 'Show active' },{ name: 'Show trash' }]; //default choice for filtering is 'show active' $scope.selectedFilterOption = $scope.dropdownOptions[1]; //three stage bool filter $scope.customFilter = function(data){ var tempData = []; angular.forEach(data,function(item){ if($scope.selectedFilterOption.name === 'Show all'){ tempData.push(item); } else if($scope.selectedFilterOption.name ==='Show active' && !item.markedForDelete){ tempData.push(item); } else if($scope.selectedFilterOption.name ==='Show trash' && item.markedForDelete){ tempData.push(item); } }); return tempData; } //grabbing data $scope.getPagedDataAsync = function(pageSize, page, filterValue, searchText){ var data; if(searchText){ var ft = searchText.toLowerCase(); DocumentService.get('filterableData.json').success(function(largeLoad){ //filter the data when searching data = $scope.customFilter(largeLoad).filter(function(item){ return JSON.stringify(item).toLowerCase().indexOf(ft) != -1; }) $scope.setPagingData($scope.customFilter(data),page,pageSize); }) } else{ DocumentService.get('filterableData.json').success(function(largeLoad){ var testLargeLoad = $scope.customFilter(largeLoad); //filter the data on initial page load when no search text has been entered $scope.setPagingData(testLargeLoad,page,pageSize); }) } }; //paging $scope.setPagingData = function(data, page, pageSize){ var pagedData = data.slice((page -1) * pageSize, page * pageSize); //filter the data for paging $scope.myData = $scope.customFilter(pagedData); $scope.myData = pagedData; $scope.totalServerItems = data.length; if(!$scope.$$phase){ $scope.$apply(); } } //watch for filter option change, set the data property of gridOptions to the newly filtered data $scope.$watch('selectedFilterOption',function(){ var data = $scope.customFilter($scope.myData); $scope.myData = data; $scope.getPagedDataAsync($scope.pagingOptions.pageSize, $scope.pagingOptions.currentPage); $scope.setPagingData($scope.myData,$scope.pagingOptions.currentPage,$scope.pagingOptions.pageSize); }) $scope.$watch('pagingOptions',function(newVal, oldVal){ if(newVal !== oldVal && newVal.currentPage !== oldVal.currentPage){ $scope.getPagedDataAsync($scope.pagingOptions.pageSize,$scope.pagingOptions.currentPage,$scope.filterOptions.filterText); } },true) $scope.message ="This is a message"; $scope.gridOptions = { data: 'myData', enablePaging: true, showFooter:true, totalServerItems: 'totalServerItems', pagingOptions: $scope.pagingOptions, filterOptions: $scope.filterOptions, showFilter: true, enableCellEdit: true, 
showColumnMenu: true, enableColumnReordering: true, enablePinning: true, showGroupPanel: true, groupsCollapsedByDefault: true, enableColumnResize: true } //get the data on page load $scope.getPagedDataAsync($scope.pagingOptions.pageSize, $scope.pagingOptions.currentPage); }); HTML
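
    As a starting point, here is a sketch of pulling the three-state check out into a registered filter (the filter name is illustrative); the property name and the selected option become parameters, and the controller can apply it via an injected $filter:

        app.filter('threeStateBool', function () {
            return function (items, prop, option) {
                return (items || []).filter(function (item) {
                    if (option === 'Show active') { return !item[prop]; }
                    if (option === 'Show trash') { return item[prop]; }
                    return true; // 'Show all'
                });
            };
        });

        // Usage inside the controller, replacing $scope.customFilter(data):
        // $scope.myData = $filter('threeStateBool')(data, 'markedForDelete',
        //     $scope.selectedFilterOption.name);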


  • Javascript tabs: call event onclick

    - by Joris
    I got some code I need to change. it is build by others, and not very neat... There is a javascript tabcontrol, containing 4 tabs, which contains gridviews. All the 4 gridviews are build during the load of the page, but I want them to load, when you activate the tabs (as it is possible to watch the side, while you don't need the specific gridviews to see) So, my question is: how to call an event (that loads the gridview) from an javascript tab? how the tabs are generated: (generated code, I know, horrible) var obj = 0; var oid = 0; var otb = 0; var myTabs = new Array(); var myTabitems = new Array(); var myTabitem = new Array(); var myTabContent = new Array(); var myLists = new Array(); function showTabContent(tab) { tb = tab.obj; id = tab.nr; if (myTabs[tb].oid != -1) { myTabs[tb].myTabContent[myTabs[tb].oid].style.display = 'none'; myTabs[tb].myTabitem[myTabs[tb].oid].className -= " active"; } myTabs[tb].myTabContent[id].style.display = 'block'; myTabs[tb].myTabitem[id].className += " active"; myTabs[tb].oid = id; } function boTabs() { var myBlocks = new Array(); myBlocks = document.getElementsByTagName("div"); var stopit = myBlocks.length; for (var g = 0; g < stopit; g++) { if (myBlocks[g].className == "tabs") { myTabs.push(myBlocks[g]); } } var stopit2 = myTabs.length; for (var i = 0; i < stopit2; i++) { myTabs[i].myLists = myTabs[i].getElementsByTagName("ul"); if (myTabs[i].myLists[0].className == "tabs") { myTabs[i].myTabitems = myTabs[i].myLists[0].getElementsByTagName("li"); } var stopit3 = myTabs[i].myTabitems.length; myTabs[i].obj = i; myTabs[i].myTabitem = new Array(); myTabs[i].myTabContent = new Array(); for (var j = 0; j < stopit3; j++) { myTabs[i].myTabitem.push(myTabs[i].myTabitems[j]); myTabs[i].myTabitem[j].nr = j; myTabs[i].myTabitem[j].obj = i; myTabs[i].myTabitem[j].onclick = function() { showTabContent(this); }; } var myTabDivs = myTabs[i].getElementsByTagName("div"); for (var j = 0; j < myTabDivs.length; j++) { if (myTabDivs[j].className == "tabcontent") { myTabs[i].myTabContent.push(myTabDivs[j]); } } myTabs[i].myTabitem[0].className += " active"; myTabs[i].myTabContent[0].style.display = 'block'; myTabs[i].oid = 0; myTabDivs = null; } myBlocks = null; } onload = boTabs;
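
    One sketch of the lazy-loading idea on top of the generated code above (loadGridView is a hypothetical loader, for example an XMLHttpRequest or postback that fills the pane; it is not part of the original page): mark each pane and fetch its gridview on first activation:

        function showTabContent(tab) {
            var tb = tab.obj, id = tab.nr;
            var pane = myTabs[tb].myTabContent[id];
            if (!pane.loaded) {
                loadGridView(tb, id); // hypothetical: fetch this tab's gridview
                pane.loaded = true;
            }
            // ... existing hide/show and className bookkeeping as above ...
        }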

    Read the article

  • How could I refactor this into more manageable methods?

    - by ChaosPandion
    private static JsonStructure Parse(string jsonText, bool throwException)
    {
        var result = default(JsonStructure);
        var structureStack = new Stack<JsonStructure>();
        var keyStack = new Stack<string>();
        var current = default(JsonStructure);
        var currentState = ParserState.Begin;
        var invalidToken = false;
        var key = default(string);
        var value = default(object);

        foreach (var token in Lexer.Tokenize(jsonText))
        {
            switch (currentState)
            {
                case ParserState.Begin:
                    switch (token.Type)
                    {
                        case TokenType.OpenBrace:
                            currentState = ParserState.ObjectKey;
                            current = result = new JsonObject();
                            break;
                        case TokenType.OpenBracket:
                            currentState = ParserState.ArrayValue;
                            current = result = new JsonArray();
                            break;
                        default:
                            invalidToken = true;
                            break;
                    }
                    break;
                case ParserState.ObjectKey:
                    switch (token.Type)
                    {
                        case TokenType.StringLiteral:
                            currentState = ParserState.ColonSeperator;
                            key = (string)token.Value;
                            break;
                        default:
                            invalidToken = true;
                            break;
                    }
                    break;
                case ParserState.ColonSeperator:
                    switch (token.Type)
                    {
                        case TokenType.Colon:
                            currentState = ParserState.ObjectValue;
                            break;
                        default:
                            invalidToken = true;
                            break;
                    }
                    break;
                case ParserState.ObjectValue:
                case ParserState.ArrayValue:
                    switch (token.Type)
                    {
                        case TokenType.NumberLiteral:
                        case TokenType.StringLiteral:
                        case TokenType.BooleanLiteral:
                        case TokenType.NullLiteral:
                            currentState = ParserState.ItemEnd;
                            value = token.Value;
                            break;
                        case TokenType.OpenBrace:
                            structureStack.Push(current);
                            keyStack.Push(key);
                            currentState = ParserState.ObjectKey;
                            current = new JsonObject();
                            break;
                        case TokenType.OpenBracket:
                            structureStack.Push(current);
                            currentState = ParserState.ArrayValue;
                            current = new JsonArray();
                            break;
                        default:
                            invalidToken = true;
                            break;
                    }
                    break;
                case ParserState.ItemEnd:
                    var jsonObject = (current as JsonObject);
                    if (jsonObject != null)
                    {
                        jsonObject.Add(key, value);
                        currentState = ParserState.ObjectKey;
                    }
                    var jsonArray = (current as JsonArray);
                    if (jsonArray != null)
                    {
                        jsonArray.Add(value);
                        currentState = ParserState.ArrayValue;
                    }
                    switch (token.Type)
                    {
                        case TokenType.CloseBrace:
                        case TokenType.CloseBracket:
                            currentState = ParserState.End;
                            break;
                        case TokenType.Comma:
                            break;
                        default:
                            invalidToken = true;
                            break;
                    }
                    break;
                case ParserState.End:
                    switch (token.Type)
                    {
                        case TokenType.CloseBrace:
                        case TokenType.CloseBracket:
                        case TokenType.Comma:
                            var previous = structureStack.Pop();
                            var previousJsonObject = (previous as JsonObject);
                            if (previousJsonObject != null)
                            {
                                currentState = ParserState.ObjectKey;
                                previousJsonObject.Add(keyStack.Pop(), current);
                            }
                            var previousJsonArray = (previous as JsonArray);
                            if (previousJsonArray != null)
                            {
                                currentState = ParserState.ArrayValue;
                                previousJsonArray.Add(current);
                            }
                            current = previous;
                            if (token.Type != TokenType.Comma)
                            {
                                currentState = ParserState.End;
                            }
                            break;
                        default:
                            invalidToken = true;
                            break;
                    }
                    break;
                default:
                    break;
            }
            if (invalidToken)
            {
                if (throwException)
                {
                    throw new JsonException(token);
                }
                return null;
            }
        }
        return result;
    }
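    One way to make this manageable is to give each ParserState its own small handler and dispatch from a table, so Parse shrinks to a loop. A sketch under assumptions: ParserContext, the Handle* method names, and the Token type used here are hypothetical wrappers around the original enums, stacks, and locals, not existing APIs.

    // Sketch only: ParserContext is an assumed bag for the stacks, the current
    // structure, the pending key/value, the InvalidToken flag, and the result.
    private delegate ParserState StateHandler(Token token, ParserContext ctx);

    private static readonly Dictionary<ParserState, StateHandler> Handlers =
        new Dictionary<ParserState, StateHandler>
        {
            { ParserState.Begin, HandleBegin },
            { ParserState.ObjectKey, HandleObjectKey },
            { ParserState.ColonSeperator, HandleColonSeperator },
            { ParserState.ObjectValue, HandleValue },
            { ParserState.ArrayValue, HandleValue },
            { ParserState.ItemEnd, HandleItemEnd },
            { ParserState.End, HandleEnd },
        };

    private static JsonStructure Parse(string jsonText, bool throwException)
    {
        var ctx = new ParserContext();
        var state = ParserState.Begin;
        foreach (var token in Lexer.Tokenize(jsonText))
        {
            state = Handlers[state](token, ctx); // each handler returns the next state
            if (ctx.InvalidToken)
            {
                if (throwException) throw new JsonException(token);
                return null;
            }
        }
        return ctx.Result;
    }

    // Each handler holds exactly one case block from the original, e.g.:
    private static ParserState HandleBegin(Token token, ParserContext ctx)
    {
        switch (token.Type)
        {
            case TokenType.OpenBrace:
                ctx.Current = ctx.Result = new JsonObject();
                return ParserState.ObjectKey;
            case TokenType.OpenBracket:
                ctx.Current = ctx.Result = new JsonArray();
                return ParserState.ArrayValue;
            default:
                ctx.InvalidToken = true;
                return ParserState.Begin;
        }
    }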

    Read the article

  • Puppet permissions issue reported on client

    - by Jon Skarpeteig
    err: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Error 400 on SERVER: Not authorized to call search on /file_metadata/plugins with {:ignore=>[".svn", "CVS", ".git"], :recurse=>true, :checksum_type=>"md5", :links=>"manage"}
    err: /File[/var/lib/puppet/lib]: Could not evaluate: Error 400 on SERVER: Not authorized to call find on /file_metadata/plugins
    Could not retrieve file metadata for puppet://example.com/plugins: Error 400 on SERVER: Not authorized to call find on /file_metadata/plugins
    What exactly causes this error, and how do I fix it?
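    These 400s usually mean the master's auth.conf (or a leftover namespaceauth.conf) is missing the stock rule that lets authenticated nodes fetch file metadata for plugin sync. A sketch of the relevant default entry, assuming a standard /etc/puppet/auth.conf location (check your install, and restart the puppetmaster after editing):

    # /etc/puppet/auth.conf on the master (path is an assumption)
    # stock rule that authorizes file serving, which covers
    # /file_metadata and /file_content requests such as /file_metadata/plugins
    path /file
    allow *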

    Read the article

  • --prefix to /usr/local or /opt?

    - by Paul Alexander
    For building apps from source, like git or rails, I've seen recommendations to install into either /opt or /usr/local. From what I've read so far, the designated use for both is about the same, and it amounts to merely a style issue. Is there any practical difference? Best practices?

    Read the article

  • xargs command works on ubuntu, but not mac

    - by Corey hart
    I have the following line of code that I use to update my personal date variable in my projects to today's date. This line works in Ubuntu's terminal, but the Mac terminal seems to be far behind. Unfortunately, I copied this snippet from some site, so I'm not sure exactly how it works. Suggestions? grep -ilr --exclude=revar.sh --exclude=README.md "[DATE]" * | grep -v .git | xargs -i@ sed -i "s/\[DATE\]/${today}/g" @
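    The snippet leans on two GNU-only behaviors: GNU xargs accepts -i (deprecated shorthand for -I{}), and GNU sed's -i takes an optional backup suffix. BSD xargs on a Mac needs an explicit -I placeholder, and BSD sed's -i requires a (possibly empty) suffix argument. A sketch of a Mac-compatible variant, assuming the same $today variable is set (GNU sed would instead want -i without the quoted empty suffix):

    grep -ilr --exclude=revar.sh --exclude=README.md "[DATE]" * | grep -v .git | \
      xargs -I@ sed -i '' "s/\[DATE\]/${today}/g" @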

    Read the article

  • Where does $PATH get set in OS X 10.6 Snow Leopard?

    - by misbehavens
    I type echo $PATH on the command line and get /opt/local/bin:/opt/local/sbin:/Users/andrew/bin:/usr/local/bin:/usr/local/mysql/bin:/usr/local/pear/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin I'm wondering where this is getting set since my .bash_login file is empty. I'm particularly concerned that, after installing MacPorts, it installed a bunch of junk in /opt. I don't think that directory even exists in a normal Mac OS X install. Update: Thanks to jtimberman for correcting my echo $PATH statement
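    On 10.5 and later the baseline PATH is assembled by /usr/libexec/path_helper, which /etc/profile runs over /etc/paths and the files in /etc/paths.d; per-user additions then come from your shell startup files. Note that bash reads only the first of ~/.bash_profile, ~/.bash_login, ~/.profile that exists, and MacPorts typically appends its /opt/local entries to one of those during install. A few commands to trace where each piece comes from (a sketch):

    cat /etc/paths                  # system-wide base entries
    ls /etc/paths.d                 # entries added by installers
    grep -n PATH ~/.bash_profile ~/.bash_login ~/.profile ~/.bashrc 2>/dev/null   # per-user additions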

    Read the article

  • Have suggestions for these assembly mnemonics?

    - by Noctis Skytower
    Greetings! Last semester in college, my teacher in the Computer Languages class taught us the esoteric language named Whitespace. In the interest of learning the language better despite a very busy schedule (midterms), I wrote an interpreter and assembler in Python. An assembly language was designed to facilitate writing programs easily, and a sample program was written with the given assembly mnemonics. Now that it is summer, a new project has begun with the objective of rewriting the interpreter and assembler for Whitespace 0.3, with further developments coming afterwards. Since there is so much more time than before to work on its design, you are presented here with an outline that provides a revised set of mnemonics for the assembly language. This post is marked as a wiki for their discussion.

    Have you ever had any experience with assembly languages in the past? Were there some instructions that you thought should have been renamed to something different? Did you find yourself thinking outside the box and with a different paradigm than the one in which the mnemonics were named? If you can answer yes to any of those questions, you are most welcome here. Subjective answers are appreciated!

    Stack Manipulation (IMP: [Space])
    Stack manipulation is one of the more common operations, hence the shortness of the IMP [Space]. There are six stack instructions.
    hold N - Push the number onto the stack
    copy - Duplicate the top item on the stack
    copy N - Copy the nth item on the stack (given by the argument) onto the top of the stack
    swap - Swap the top two items on the stack
    drop - Discard the top item on the stack
    drop N - Slide n items off the stack, keeping the top item

    Arithmetic (IMP: [Tab][Space])
    Arithmetic commands operate on the top two items on the stack and replace them with the result of the operation. The first item pushed is considered to be left of the operator.
    add - Addition
    sub - Subtraction
    mul - Multiplication
    div - Integer division
    mod - Modulo

    Heap Access (IMP: [Tab][Tab])
    Heap access commands look at the stack to find the address of items to be stored or retrieved. To store an item, push the address then the value and run the store command. To retrieve an item, push the address and run the retrieve command, which will place the value stored at that location on top of the stack.
    save - Store
    load - Retrieve

    Flow Control (IMP: [LF])
    Flow control operations are also common. Subroutines are marked by labels, as are the targets of conditional and unconditional jumps, by which loops can be implemented. Programs must be ended by means of [LF][LF][LF] so that the interpreter can exit cleanly.
    L: - Mark a location in the program
    call L - Call a subroutine
    goto L - Jump unconditionally to a label
    if=0 L - Jump to a label if the top of the stack is zero
    if<0 L - Jump to a label if the top of the stack is negative
    return - End a subroutine and transfer control back to the caller
    halt - End the program

    I/O (IMP: [Tab][LF])
    Finally, we need to be able to interact with the user. There are I/O instructions for reading and writing numbers and individual characters. With these, string manipulation routines can be written. The read instructions take the heap address in which to store the result from the top of the stack.
    print chr - Output the character at the top of the stack
    print int - Output the number at the top of the stack
    input chr - Read a character and place it in the location given by the top of the stack
    input int - Read a number and place it in the location given by the top of the stack

    Question: How would you redesign, rewrite, or rename the previous mnemonics, and for what reasons?
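    For a concrete feel of the proposed mnemonics, here is a tiny sample program under the semantics above; the ";" comment syntax and label layout are assumptions about the assembler, not part of the spec. It reads a number, doubles it, and prints the result:

        hold 0        ; push heap address 0
        input int     ; read a number into address 0 (pops the address)
        hold 0        ; push the address again
        load          ; replace it with the stored value
        copy          ; duplicate the value
        add           ; value + value
        print int     ; output the doubled number
        halt          ; end the program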

    Read the article

< Previous Page | 120 121 122 123 124 125 126 127 128 129 130 131  | Next Page >