Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Good tools for keeping the content in test/staging/live environments synchronized

    - by David Stratton
    I'm looking for recommendations on automated folder-synchronization tools to keep the content in our three environments synchronized automatically. Specifically, we have several applications where a user can upload content (via a File Upload page or a similar mechanism), such as images, PDF files, Word documents, etc. In the past, we had the user doing this on our live server, and as a result our test and staging servers had to be synchronized manually. Going forward, we will have them upload content to the staging server, and we would like some software to automatically copy the files off to the test and live servers EITHER on a scheduled basis OR as the files get uploaded. I was planning on writing my own component and either setting it up as a scheduled task or using a FileSystemWatcher, but it occurred to me that this has probably already been done, and I might be better off with some sort of synchronization tool that already exists.

    On our web site, there are a limited number of folders that we want to keep synchronized. In these folders, it is all or nothing: we want to make sure the folders are EXACT duplicates. This should make it fairly straightforward, and I would think that any software that can synchronize folders would be OK, except that we also would like the software to log changes. (This rules out simple BATCH files.) So I'm curious: if you have a similar environment, how did you solve the challenge of keeping everything synchronized? Are you aware of a tool that is reliable and will meet our needs? If not, do you have a recommendation for something that will come close, or better yet, an open source solution where we can get the code and modify it as needed (preferably .NET)?

    Added: Also, I DID google this first, but there are so many options; I am interested mostly in knowing what actually works well vs. what they SAY works, which is why I'm asking here.
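
    In case the roll-your-own route wins out, here is a rough sketch of the mirror-and-log idea, in Python rather than the asker's preferred .NET, with hypothetical UNC paths and none of the retries or locking a production tool would need:

        import filecmp
        import logging
        import os
        import shutil

        logging.basicConfig(filename="sync.log", level=logging.INFO)

        def mirror(src, dst):
            """Make dst an exact duplicate of src, logging every change."""
            os.makedirs(dst, exist_ok=True)
            cmp = filecmp.dircmp(src, dst)
            for name in cmp.left_only + cmp.diff_files:
                s = os.path.join(src, name)
                d = os.path.join(dst, name)
                if os.path.isdir(s):
                    shutil.copytree(s, d, dirs_exist_ok=True)
                else:
                    shutil.copy2(s, d)
                logging.info("copied %s -> %s", s, d)
            for name in cmp.right_only:
                d = os.path.join(dst, name)
                if os.path.isdir(d):
                    shutil.rmtree(d)
                else:
                    os.remove(d)
                logging.info("removed %s", d)
            for name in cmp.common_dirs:
                mirror(os.path.join(src, name), os.path.join(dst, name))

        mirror(r"\\staging\content", r"\\live\content")  # hypothetical UNC paths

    A scheduled task could invoke mirror() for each watched folder; the log then records exactly which files were copied or removed on each run.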


  • Structuring input of data for localStorage

    - by WmasterJ
    It is nice when there isn't a DB to maintain and users to authenticate. My professor has asked me to convert a recent research project of his that uses Bespin and calculates errors made by users in a code editor as part of his research. The goal is to convert from MySQL to using HTML5 localStorage completely. It doesn't seem so hard to do, even though digging through his code might take some time.

    Question: I need to store files and state (last placement of the cursor and the active file). I have already done so by implementing the recommendations in another stackoverflow thread, but I would like your input on how to structure the content. My current solution is a hashmap-like structure using JavaScript objects:

        files = {};
        // later, when saving
        files[fileName] = data;

    and then storing in localStorage using a recommended helper:

        localStorage.setObject(files);

    Currently I'm also considering using some type of numeric id, so that files can be renamed without the hassle of renaming the key in the hashmap. What is your opinion on the way it is solved, and would you do it any differently?


  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. Just among the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates. A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET, with some legacy ASP. We're thinking of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and against SQL over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records.

    What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

        - Security
        - Concurrent access
        - Performance with large amounts of data
        - Amount of time needed for such a massive rewrite/switch
        - Lack of transactions
        - PITA to map relational data to flat files
        - NTFS doesn't handle huge numbers of files in a directory well

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
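
    As a rough sketch of the demo console app described above, the following Python harness times bulk inserts plus a lookup against a flat file and against an indexed SQL table (sqlite3 stands in for SQL Server here, and the row count and file name are made up). The point it illustrates: a flat-file search is a full scan, while an indexed lookup is not.

        import csv
        import sqlite3
        import time

        ROWS = 100000

        def timed(label, fn):
            t0 = time.perf_counter()
            fn()
            print("%s: %.2fs" % (label, time.perf_counter() - t0))

        def flat_file():
            with open("records.csv", "w", newline="") as f:
                w = csv.writer(f)
                for i in range(ROWS):
                    w.writerow([i, "name-%d" % i])
            # a lookup means scanning the whole file
            with open("records.csv", newline="") as f:
                hits = [r for r in csv.reader(f) if r[0] == "99999"]

        def sql():
            db = sqlite3.connect(":memory:")
            db.execute("CREATE TABLE r (id INTEGER PRIMARY KEY, name TEXT)")
            db.executemany("INSERT INTO r VALUES (?, ?)",
                           ((i, "name-%d" % i) for i in range(ROWS)))
            db.commit()
            # an indexed lookup touches only the matching row
            hits = db.execute("SELECT * FROM r WHERE id = 99999").fetchall()

        timed("flat file", flat_file)
        timed("sql", sql)

    None of this even touches concurrency or transactions, which is where flat files fall over hardest.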


  • SVN 409 conflict on commits and updates

    - by bhefny
    We have been using SVN for the past year, and when we migrated to an online server we started getting this error:

        Commit: Commit failed (details follow):
        File or directory 'x.php' is out of date; try updating
        resource out of date; try updating
        CHECKOUT of '/!svn/ver/491/x.php': 409 Conflict (http://svn.example.com)

    We are currently using SmartSVN 6.5 and have also tested with RapidSVN and Syncro (we can't use Tortoise because we have a lot of Ubuntu users). At the beginning I thought "How do you fix an SVN 409 Conflict Error" would help, but it didn't; we are still facing the same error, and it's even more absurd now. The main problem is that after you get the error, you can't shake it off. Updating doesn't solve it, reverting doesn't solve it; you are just stuck with the error. The only thing that works is removing the file from SVN and adding your version back, but that defeats the purpose of using SVN in the first place.

    This is our Apache config (and yes, autoversioning is ON):

        <Location />
            DAV svn
            SVNPath /home/example/svn
            SVNAutoversioning on
            AuthType Basic
            AuthName "Access Restricted"
            AuthUserFile /home/example/svn-auth-file
            Require valid-user
        </Location>
        <Directory />
            <Files ~ "^\.ht">
                Order allow,deny
                Allow from all
                Satisfy All
            </Files>
            <Files ~ "^error_log">
                Order allow,deny
                Allow from all
                Satisfy All
            </Files>
        </Directory>

    And here are some observations:

        - We don't receive real conflicts anymore; we just get this 409 Conflict.
        - You can somewhat avoid the error if you always update before committing.
        - When committing a modified file plus a newly added file, you get the error, as if the added file incremented the version by one and you are then committing the other file with an older version.

    Please advise, we are about to go insane.


  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot of big files (up to 60 TB total, usually 30 to 40 GB each) to tar. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, once again for tar'ing) is more or less a necessity to achieve very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I need some way to read a file once, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of an existing solution before I start code-churning?
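
    For illustrating the single-read idea only (the asker doubts a scripting language can keep up with LTO-4 speeds, and the file names below are made up): Python's tarfile pulls data through any object exposing read(), so a thin wrapper can hash each file during the one pass that feeds the archive.

        import hashlib
        import tarfile

        class HashingReader:
            """File wrapper that feeds every chunk it reads into a hash."""
            def __init__(self, fileobj, digest):
                self.fileobj = fileobj
                self.digest = digest

            def read(self, size=-1):
                chunk = self.fileobj.read(size)
                self.digest.update(chunk)
                return chunk

        def tar_with_checksums(paths, tar_path, sums_path):
            with tarfile.open(tar_path, "w") as tar, open(sums_path, "w") as sums:
                for path in paths:
                    digest = hashlib.md5()
                    info = tar.gettarinfo(path)
                    with open(path, "rb") as f:
                        # addfile() drains the wrapper, so each byte is
                        # hashed exactly once on its way into the archive
                        tar.addfile(info, HashingReader(f, digest))
                    sums.write("%s  %s\n" % (digest.hexdigest(), path))

        tar_with_checksums(["file1.bin", "file2.bin"], "backup.tar", "backup.md5")

    A C implementation would use the same wrap-the-stream structure; the per-file hash state just has to reset at each archive member boundary.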


  • Pull/clone an SVN repository into hg with a new default branch name?

    - by TheLQ
    I'm forking a project's SVN repo and need to integrate it into my Mercurial repo. To keep things simple I have a local hgsubversion repo and a local hg repo. However, both the Mercurial and hgsubversion repos use default as their default branch name. My goal is to put the original code and its updates on one branch and my code on the default branch. However, I have yet to manage this:

        W:\programming\tcsite-svn-test>hg clone http://*HG_SITE*/hg .
        no changes found
        updating to branch default
        0 files updated, 0 files merged, 0 files removed, 0 files unresolved

        W:\programming\tcsite-svn-test>hg branch blizzard
        marked working directory as branch blizzard

        W:\programming\tcsite-svn-test>hg commit

        W:\programming\tcsite-svn-test>hg log
        changeset:   0:be13a9580df0
        branch:      blizzard
        tag:         tip
        user:        Leon Blakey <[email protected]>
        date:        Fri Jan 14 23:44:25 2011 -0500
        summary:     Created Blizzard Branch

        W:\programming\tcsite-svn-test>hg pull http://*SVN_SITE*/svn/
        pulling from http://*SVN_SITE*/svn/
        ....
        pulled 23 revisions
        (run 'hg update' to get a working copy)

        W:\programming\tcsite-svn-test>hg branch
        blizzard

        W:\programming\tcsite-svn-test>hg branches
        default                       23:93642a8890ab   <------
        blizzard                       0:be13a9580df0

    Not surprisingly, hgsubversion puts pulled commits into the default branch when I really need them in the blizzard branch. From the docs, there is no way to rename the branch a commit came from. Frustratingly, I can't even come up with a way to do it on a repo where only the hgsubversion repo is being pulled from, nothing else; all commits are tied to that one branch no matter what. Are there any suggestions on how to pull changes from an SVN repo and rename the branch to something else?


  • How to download a file from a UNC mapped share via IIS and ASP

    - by helgeg
    I am writing an ASP application that will serve files to clients through the browser. The files are located on a file server that is available from the machine IIS is running on via a UNC path (\\server\some\path). I want to use something like the code below to serve the file. Serving files that are local to the IIS machine works well with this method; my trouble is serving files from the UNC share:

        //Set the appropriate ContentType.
        Response.ContentType = "Application/pdf";
        //Get the physical path to the file.
        string FilePath = MapPath("acrobat.pdf");
        //Write the file directly to the HTTP content output stream.
        Response.WriteFile(FilePath);
        Response.End();

    My question is how I can specify a UNC path for the file name. Also, to access the file share I need to connect with a specific username/password. I would appreciate some pointers on how to achieve this (either with the approach above or by other means).


  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL), and I want to send my files from my localhost to my staging and production servers in the most efficient and hassle-free way. I am currently convinced that the best way to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server.

    Pushing to the staging server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo into a non-bare repo that also acts as the web-hosted directory.

    Pushing to the production server: Here's my newest source of confusion... In the response I cited above, it made me curious why @Paul states that it's a completely different story when pushing to a live production server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls?

    Config files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions?

    Accessing data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost? I realize I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.
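
    For the staging push described above, the post-receive hook only needs to check the pushed work out into the web-hosted directory. Hooks are conventionally shell scripts, but any executable works; here is a minimal Python sketch, with the repo path, branch name, and web root all made up:

        #!/usr/bin/env python
        """post-receive: check the pushed branch out into the web root."""
        import subprocess

        GIT_DIR = "/home/git/site.git"     # hypothetical bare repo
        WORK_TREE = "/var/www/staging"     # hypothetical web-hosted directory

        # ignore the ref details on stdin and just force-checkout the branch
        subprocess.check_call([
            "git", "--git-dir", GIT_DIR, "--work-tree", WORK_TREE,
            "checkout", "-f", "release",
        ])

    The production server would run the same hook pointed at master and its own web root.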


  • Writing an audio player in C#

    - by Malki
    Hi, I have a pretty cool idea for a very special media player. I like to think of this project as a mini-startup, since I don't yet know if my idea is practical. Anyway, before implementing my idea, I first need to be able to implement a simple audio player. My preferred language for this project is C#, simply because it's so easy to use, but any other object-oriented language would be fine too, I guess. I started out with no knowledge whatsoever about audio. My main goals right now are:

        - Being able to play audio files, in as many formats as possible (sort of a VLC-type player, but audio only for now).
        - Being able to analyze audio files: reading frequency, amplitude, volume, and other information about the audio. I think a good idea here may be to analyze one file format (PCM?) and temporarily convert any file I want to analyze to that format. This is in order to later implement a mechanism that compares songs and identifies similar songs to recommend to the user. (This feature isn't part of my idea, but since it exists in many players nowadays, I figure I need it too if I want to compete with them.)

    BTW, I currently don't have any knowledge about audio/wavelengths/frequencies and such, so I'd appreciate it if someone could point me in the right direction regarding this analysis feature. Maybe in the future I'd expand to playing video files as well, but for now I'm concentrating on audio.

    After searching the Internet for a while, I've come across LAME. The problem is, it's not C#, and I'm not sure how to use it. I know there is something called "interoperability" that is supposed to let me work with native DLL files through C#; any information about that would be helpful as well. Any help would be much appreciated. Thanks, Malki :)
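
    As a tiny taste of the analysis side, in Python rather than the asker's preferred C#, and assuming an already-decoded 16-bit mono PCM WAV file (so treat the details as illustrative): once audio is in PCM form, a volume measure is plain arithmetic on the samples.

        import math
        import struct
        import wave

        def rms_amplitude(path):
            """Root-mean-square amplitude of a 16-bit mono PCM WAV file."""
            with wave.open(path, "rb") as w:
                raw = w.readframes(w.getnframes())
            # each sample is a signed 16-bit little-endian integer
            samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
            return math.sqrt(sum(s * s for s in samples) / len(samples))

        print(rms_amplitude("example.wav"))  # hypothetical file

    Frequency content takes a Fourier transform over windows of those same samples; the "decode everything to PCM first" plan in the question is exactly what makes both measurements uniform across formats.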


  • Ant: use include and exclude together

    - by Ken
    OK, this seems like it should be really simple. I'm using Apache Ant 1.8, and I have a target which does:

        <delete file="output/program.tar.bz2"/>
        <tar basedir="input" destfile="output/program.tar.bz2" compression="bzip2">
            <tarfileset dir="input">
                <include name="goodfolder1/**"/>
                <include name="goodfolder2/**"/>
                <exclude name="**/badfile"/>
                <exclude name="**/*.badext"/>
            </tarfileset>
        </tar>

    I want it to make a .tar.bz2 of input/goodfolder1 and input/goodfolder2, excluding files named "badfile" and files with extension ".badext". It's giving me a .tar.bz2, but it includes badfile and *.badext; the excludes seem to be ignored. The order of include/exclude doesn't seem to make a difference. I tried wrapping the includes/excludes in a <patternset> (the docs say it's implicit?), but it made no difference. I'm sure there's something simple I'm missing, since the manual has a very similar example, though in a somewhat different context.

    EDIT: It looks like it could be related to the dir="input" attribute: it's adding everything in "input", and then adding everything in the tarfileset on top of that. Files I want appear twice in program.tar.bz2, but files that are excluded only appear once. But dir is mandatory, and I don't see how this differs from the examples in the manual.


  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing the design of a data warehouse strategy within our group to meet testing, reproducibility, and data syncing requirements. One of the suggested ideas is to adapt a NoSQL approach using an existing tool rather than re-implement a whole lot of the same on a file system. I don't know if a NoSQL approach is even the best approach to what we're trying to accomplish, but perhaps if I describe what we need/want, you all can help:

        - Most of our files are large, 50+ GB in size, held in a proprietary, third-party format.
        - We need to be able to access each file by a name/date/source/time/artifact combination. Essentially a key-value-pair style look-up.
        - When we query for a file, we don't want to have to load all of it into memory. The files are really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it.
        - We want to easily add, remove, and export files from storage.
        - We'd like to set up automatic file replication between two servers (we can write a script for this); that is, sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server; we'd like complete replication.
        - We also have other, smaller files that have a tree-type relationship with the big files: one file's content points to the next, and so on. It's not a "spoked wheel", it's a full-blown tree.
        - We'd prefer a Python, C or C++ API to work with a system like this, but most of us are experienced with a variety of languages. We don't mind, as long as it works, gets the job done, and saves us time.

    What do you think? Is there something out there like this?
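
    The name/date/source/time/artifact look-up on its own is small. Here is a minimal Python sketch of the idea, with every field value invented: a persistent key-value index that returns only a path reference, leaving the 50+ GB file on disk for the proprietary API to ingest in portions.

        import shelve

        def make_key(name, date, source, time, artifact):
            return "|".join([name, date, source, time, artifact])

        # register a file
        with shelve.open("file_index") as index:
            index[make_key("runA", "2012-06-01", "rig3", "0400", "raw")] = \
                "/data/store/runA-20120601.bin"

        # later: fetch a reference to the file, never its contents
        with shelve.open("file_index") as index:
            path = index[make_key("runA", "2012-06-01", "rig3", "0400", "raw")]
        # hand `path` to the third-party API to read portions of the file

    The replication and tree-relationship requirements are the parts that actually push toward a real storage system rather than a toy index like this.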


  • jQuery - Creating a dynamic content loader using $.get()

    - by Kenny Bones
    Hello everybody! (hello dr. Nick) :) So I posted a question yesterday about a content loader plugin for jQuery I thought I'd use, but didn't get it to work: http://stackoverflow.com/questions/2469291/jquery-could-use-a-little-help-with-a-content-loader

    Although it works now, I see some disadvantages to it. It requires heaploads of files where the content lives, since the code essentially picks up the URL in the href of the link and searches that file for a div called #content. What I would really like to do is to collect all of those files into a single file, give each div/container its own unique ID, and just pick up the content from those, so I won't need so many separate files lying around.

    Nick Craver thought I should use $.get() instead, since it's got a decent callback. But I'm not that strong in js at all, and I don't even know what this means. I'm basically used to Visual Basic and passing arguments, storing in txt files, etc., which is really not suitable for this purpose. So what's the "normal" way of doing things like this? I'm pretty sure I'm not the only one who's thought of this, right? I basically want to get content from a single php file that contains a lot of divs with unique IDs, and without much hassle, fade out the existing content on my main page, pick up the contents from the other file, and fade it into my main page.


  • Where should an application's default folder live?

    - by HotOil
    Hi: I'm creating a little app that configures a connected device and then saves the config information in a file. The filename cannot be chosen by the user, but its location can be. Where is the best place for the app's default save-to folder?

    I have seen examples out there where it is the "MyDocuments" location (e.g. Visual Studio does this). I have seen a folder created right at the top of the C:\ drive; I find that to be a little obnoxious, personally. It could be in Program Files\[Manufacturer] or Program Files\[Product Name], or wherever the app was installed. I have used this location in the past; I dislike it because Windows Explorer does not let a user browse there very easily ('browsability'). Going with this last notion that 'browsability' is a factor, I suppose MyDocuments is the best choice. Is this the most common, most widely accepted practice?

    I think historically we have chosen the install folder because that co-locates the data with the device management utilities, but I would really like to get away from that. I don't want the user to have to go pawing through system files to find his/her data, especially if that person is not too Windows-savvy. Also, I am using the .NET WinForms FolderBrowserDialog, and the Environment.SpecialFolder enum isn't helpful in setting up the dialog to point into the Program Files folder. Thanks for your input! Suz.


  • Pymedia video encoding failed

    - by user1474837
    I am using Python 2.5 on Windows XP. I am trying to turn a list of pygame images into a video file using the function below, which I found on the internet and edited. It worked at first, then it stopped working. This is what it printed out:

        Making video...
        Formating 114 Frames...
        starting loop
        making encoder
        Frame 1 process 1
        Frame 1 process 2
        Frame 1 process 2.5

    This is the error:

        Traceback (most recent call last):
          File "ScreenCapture.py", line 202, in <module>
            makeVideoUpdated(record_files, video_file)
          File "ScreenCapture.py", line 151, in makeVideoUpdated
            d = enc.encode(da)
        pymedia.video.vcodec.VCodecError: Failed to encode frame( error code is 0 )

    This is my code:

        def makeVideoUpdated(files, outFile, outCodec='mpeg1video', info1=0.1):
            fw = open(outFile, 'wb')
            if (fw == None):
                print "Cannot open file " + outFile
                return
            if outCodec == 'mpeg1video':
                bitrate = 2700000
            else:
                bitrate = 9800000
            start = time.time()
            enc = None
            frame = 1
            print "Formating " + str(len(files)) + " Frames..."
            print "starting loop"
            for img in files:
                if enc == None:
                    print "making encoder"
                    params = {'type': 0, 'gop_size': 12, 'frame_rate_base': 125,
                              'max_b_frames': 90, 'height': img.get_height(),
                              'width': img.get_width(), 'frame_rate': 90,
                              'deinterlace': 0, 'bitrate': bitrate,
                              'id': vcodec.getCodecID(outCodec)}
                    enc = vcodec.Encoder(params)
                # Create VFrame
                print "Frame " + str(frame) + " process 1"
                bmpFrame = vcodec.VFrame(vcodec.formats.PIX_FMT_RGB24, img.get_size(),
                                         # Convert image to 24-bit RGB
                                         (pygame.image.tostring(img, "RGB"), None, None))
                print "Frame " + str(frame) + " process 2"
                # Convert to YUV, then encode
                da = bmpFrame.convert(vcodec.formats.PIX_FMT_YUV420P)
                print "Frame " + str(frame) + " process 2.5"
                d = enc.encode(da)  # THIS IS WHERE IT STOPS
                print "Frame " + str(frame) + " process 3"
                fw.write(d.data)
                print "Frame " + str(frame) + " process 4"
                frame += 1
            print "savng file"
            fw.close()

    Could somebody tell me why I get this error and possibly how to fix it? The files argument is a list of pygame images, outFile is a path, outCodec is left at its default, and info1 is not used anymore.

    UPDATE 1: This is the code I used to make that list of pygame images:

        from PIL import ImageGrab
        import time, pygame

        pygame.init()
        f = []  # This is the list that contains the images
        fps = 1
        for n in range(1, 100):
            info = ImageGrab.grab()
            size = info.size
            mode = info.mode
            data = info.tostring()
            info = pygame.image.fromstring(data, size, mode)
            f.append(info)
            time.sleep(fps)


  • Does XAML work with file links in Visual Studio?

    - by Tim
    I'm adding a new WPF project to an existing Visual Studio solution and would like to reuse a bunch of code (C# and XAML) from an existing project within the solution. I've created the new project and added existing files as follows:

        1. Right click the project
        2. Add - Add Existing Item
        3. Find the file to reuse, use the arrow next to "Add" and choose "Add as Link"

    I now have a nice project set up with all the proper links. However, XAML chokes on these links. For example:

        <ResourceDictionary.MergedDictionaries>
            <ResourceDictionary Source="Resources\Elements\Buttons\Buttons.xaml" />
            <ResourceDictionary Source="Resources\Elements\TextBox\TextBox.xaml" />
        </ResourceDictionary.MergedDictionaries>

    The files "Buttons.xaml" and "TextBox.xaml" exist as links in my new project. The project builds, but when I run, I get the following XamlParseException:

        'Resources\Elements\Buttons\Buttons.xaml' value cannot be assigned to property 'Source'
        of object 'System.Windows.ResourceDictionary'.
        Cannot locate resource 'resources/elements/buttons/buttons.xaml'.

    It seems like the XAML parser requires an actual copy of these XAML files to exist in my new project, instead of links. This is exactly what I'm trying to avoid: I want my projects to share these files so that any changes carry over to the other project without hunting and copying. Any insight is appreciated!


  • Locating SSL certificate, key and CA on server

    - by jovan
    Disclaimer: you don't need to know Node to answer this question, but it would help. I have a Node server and I need to make it work with HTTPS. As I researched around the internet, I found that I have to do something like this:

        var fs = require('fs');
        var credentials = {
            key: fs.readFileSync('path/to/ssl/private-key'),
            cert: fs.readFileSync('path/to/ssl/cert'),
            ca: fs.readFileSync('path/to/something/called/CA')
        };
        var app = require('https').createServer(credentials, handler);

    I have several problems with this. First off, all the examples I found use completely different approaches. Some link to .pem files for both the certificate and the key; I don't know what .pem files are, but I know my certificate is .crt and my key is .key. Some start off at the root folder and some seem to just have these .pem files in the application directory; I don't. Some use the ca entry too and some don't. This CA is supposed to be my domain's CA bundle according to some articles, but none explain where to find this file. In the ssl directory on my server I have one .crt file in the certs directory and one .key file in the keys directory, in addition to an empty csrs directory and an ssl.db file. So, where do I find these 3 files (key, cert, ca) and how do I link to them correctly?


  • ASP file system object

    - by sushant
    I am using this code to access files and folders:

        <%@ Language=VBScript %>
        <%
        option explicit
        dim sRoot, sDir, sParent, objFSO, objFolder, objFile, objSubFolder, sSize
        %>
        <%
        sRoot = "D:\Raghu"
        sDir = Request("Dir")
        sDir = sDir & "\"
        Response.Write "" & sDir & "" & vbCRLF
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        on error resume next
        Set objFolder = objFSO.GetFolder(sRoot & sDir)
        if err.number <> 0 then
            Response.Write "Could not open folder"
            Response.End
        end if
        on error goto 0
        sParent = objFSO.GetParentFolderName(objFolder.Path)
        ' Remove the contents of sRoot from the front. This gives us the parent
        ' path relative to the root folder,
        ' e.g. if the parent folder is "c:\webfiles\subfolder1\subfolder2" then we just want "\subfolder1\subfolder2"
        sParent = mid(sParent, len(sRoot) + 1)
        Response.Write ""
        ' Give a link to the parent folder. This is just a link to this page,
        ' only passing in the new folder as a parameter
        Response.Write "Parent folder" & vbCRLF
        ' Now we want to loop through the subfolders in this folder
        For Each objSubFolder In objFolder.SubFolders
            ' And provide a link to them
            Response.Write "" & objSubFolder.Name & "" & vbCRLF
        Next
        ' Now we want to loop through the files in this folder
        For Each objFile In objFolder.Files
            if Clng(objFile.Size) < 1024 then
                sSize = objFile.Size & " bytes"
            else
                sSize = Clng(objFile.Size / 1024) & " KB"
            end if
            ' And provide a link to view them. This is a link to show.asp,
            ' passing in the directory and the file as parameters
            Response.Write "" & objFile.Name & "" & sSize & "" & objFile.Type & "" & vbCRLF
        Next
        %>

    It works fine, but when I try to access something on a shared path like "\cvrdd0110:share" it gives an error. How do I access these files?


  • PHP gettext on Windows

    - by Axsuul
    There are some tutorials out there for gettext (with Poedit)... unfortunately, mostly for a UNIX environment. And even more unfortunately, I am running my WAMP server on Windows XP (but I am developing for a UNIX environment), and none of the tutorials can get gettext working properly for me. From the man page (http://us3.php.net/manual/en/book.gettext.php), it appears that it's a different process in a Windows environment. I've tried out some of the solutions in the comments but I still can't get it to work! Please, I've spent many hours on this; hopefully someone can point me in the right direction to get this thing to work! (And I'm sure there are others out there who share my frustration.) So far with my setup, I'm only getting the output "Hello World!" whereas I should be getting the translated string. Here is my setup/code so far:

        <?php
        // test.php
        if (!defined('LC_MESSAGES')) {
            define('LC_MESSAGES', 6);
        }

        $locale = "deu_DEU"; // apparently the locales are different on a WINDOWS platform
        putenv("LC_ALL=$locale");
        setlocale(LC_ALL, $locale);

        bindtextdomain("greetings", ".\locale");
        textdomain("greetings");

        echo _("Hello World");
        ?>

    Folder structure:

        root:     C:\Program Files\WampServer 2\www
        test.php: C:\Program Files\WampServer 2\www\site
        .po:      C:\Program Files\WampServer 2\www\site\locale\deu_DEU\LC_MESSAGES\greetings.po
        .mo:      C:\Program Files\WampServer 2\www\site\locale\deu_DEU\LC_MESSAGES\greetings.mo

    Please advise! Thanks for your time :)


  • Convert C function to Objective C or alternative way to include C library?

    - by Moshe
    Background: I'm new to Objective-C and I've never touched C before. I'm trying to include a C library in my iPhone/iPod Touch project, and I could not successfully compile the library's source code using the included instructions, so I've resorted to including the .h and .c files in Xcode. The library is for Hebrew dates and the corresponding Jewish holidays; it can be found here: http://sourceforge.net/projects/libhdate/

    I cannot seem to import the .c files into my implementation files; I can import the .h files, though. When I try to use #import "file.c", where file.c is a file in Xcode, it doesn't work. Why not? I've considered just rewriting the needed functions in Objective-C rather than the whole library. How can I make the following C function work in Objective-C? Are there other things that need to be included/re-coded/compiled? How so? I am almost certain something is missing, but I'm not sure what.

    Code:

        int hdate_get_omer_day(hdate_struct const * h)
        {
            int omer_day;
            hdate_struct sixteen_nissan;

            hdate_set_hdate(&sixteen_nissan, 16, 7, h->hd_year);
            omer_day = h->hd_jd - sixteen_nissan.hd_jd + 1;

            if ((omer_day > 49) || (omer_day < 0))
                omer_day = 0;

            return omer_day;
        }

    So... should I be converting it, or trying somehow to compile it to an appropriate format, and how so? (I don't know what format a static library should be in, by the way, or whether it should even be static; I'm so lost here....) I appreciate the help!


  • Finding the width of a directed acyclic graph... with only the ability to find parents

    - by Platinum Azure
    Hi guys, I'm trying to find the width of a directed acyclic graph... as represented by an arbitrarily ordered list of nodes, without even an adjacency list.

    The graph/list is for a parallel GNU Make-like workflow manager that uses files as its criteria for execution order. Each node has a list of source files and target files. We have a hash table in place so that, given a file name, the node which produces it can be determined. In this way, we can figure out a node's parents by examining the nodes which generate each of its source files using this table. That is the ONLY ability I have at this point, without changing the code severely. The code has been in public use for a while, and the last thing we want to do is to change the structure significantly and have a bad release. And no, we don't have time to test rigorously (I am in an academic environment). Ideally we're hoping we can do this without doing anything more dangerous than adding fields to the node.

    I'll be posting a community-wiki answer outlining my current approach and its flaws. If anyone wants to edit that, or use it as a starting point, feel free. If there's anything I can do to clarify things, I can answer questions or post code if needed. Thanks!

    EDIT: For anyone who cares, this will be in C. Yes, I know my pseudocode is in some horribly botched Python look-alike. I'm sort of hoping the language doesn't really matter.
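
    As a starting-point sketch (Python for brevity, though the asker says the real code will be in C): with only a parents-of lookup, each node's depth is 1 plus the maximum parent depth, and the most-populated level gives a width figure. Note that the largest level is only a lower bound on the true width (the maximum antichain), but for level-style parallel scheduling it is often the number that matters.

        from collections import Counter

        def max_level_width(nodes, parents_of):
            """Size of the most-populated depth level of the DAG.

            nodes      -- arbitrarily ordered list of all nodes
            parents_of -- callable returning a node's parents, the only
                          graph operation available here
            """
            depth = {}

            def depth_of(n):
                if n not in depth:
                    parents = parents_of(n)
                    if parents:
                        depth[n] = 1 + max(depth_of(p) for p in parents)
                    else:
                        depth[n] = 0
                return depth[n]

            levels = Counter(depth_of(n) for n in nodes)
            return max(levels.values())

    The memoization keeps this linear in edges; a C version would replace the recursion with an explicit stack and store the depth in the new node field.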


  • Silverlight Business Application template with WCF is throwing a warning

    - by Manoj
    Hi, I am using the Silvelight Business Application template. I wrote a function which uses Membership.getUserList function to return the user list. I tried exposing it as Service using WCF. But when I try to compile the client side code it throws a warning saying "Client Proxy Generation for user_authentication.Web.Service1 failed'. Why does it happen? The complete warning message is: Warning 4 Client proxy generation for service 'user_authentication.Web.Service1' failed: Generating metadata files... Warning: Unable to load a service with configName 'user_authentication.Web.Service1'. To export a service provide both the assembly containing the service type and an executable with configuration for this service. Details:Either none of the assemblies passed were executables with configuration files or none of the configuration files contained services with the config name 'user_authentication.Web.Service1'. Warning: No metadata files were generated. No service contracts were exported. To export a service, use the /serviceName option. To export data contracts, specify the /dataContractOnly option. This can sometimes occur in certain security contexts, such as when the assembly is loaded over a UNC network file share. If this is the case, try copying the assembly into a trusted environment and running it.


  • Using IAM for user authentication

    - by mdavis6890
    I've read lots and lots of posts that touch on what I think should be a very common use case, but without finding exactly what I want, or a simple reason why it can't be done. I have some files on S3. I want to be able to grant certain users access to certain files, via a front end that I build. So far, I've made it work this way:

        - I built the front end in Django, using its built-in Users and Groups.
        - I have a model for Buckets, in which I mirror my S3 buckets.
        - I have a m2m relationship from groups to buckets representing the S3 permissions.
        - The user logs in and authenticates against Django's users.
        - I grab from Django the list of buckets that the user is allowed to see.
        - I use boto to grab a list of links to files from those buckets and display them to the user.

    This works, but isn't ideal, and also just doesn't feel right. I've got to keep a mirror of the buckets, and I also have to maintain my own list of users/passwords and permissions, when AWS already has all that built in. What I really want is to simply create the users in IAM and use group permissions in IAM to control access to the S3 buckets, with no duplication of data or function. My app would request a UN/PW from the user and use that to connect to IAM/S3 to pull the list of buckets and files, then display links to the user. Simple. How can I, or why can't I? Am I looking at this the wrong way? What's the "right" way to address this (I assume) very common use case?
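
    One building block worth knowing regardless of the final auth design: the links handed to the user don't require public buckets, because S3 can sign time-limited URLs. A minimal sketch with boto3 (the current AWS SDK, rather than the boto mentioned above; bucket and key names are made up):

        import boto3

        s3 = boto3.client("s3")

        # a link granting temporary read access to one object
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "example-bucket", "Key": "reports/q1.pdf"},
            ExpiresIn=3600,  # seconds the link stays valid
        )
        print(url)

    The app's own credentials do the signing, so per-user rights still have to be enforced somewhere, which is exactly the IAM question being asked.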


  • How to determine which file in the previous version a text block in the new version came from?

    - by Muhammad Asaduzzaman
    The problem is described below. Suppose I have a list of files in one version (say A, B, C, D). In the next version I have the following files (A, E, F, G). There are similarities in their contents: the files in the later version come from the previous version by file renaming, content addition, deletion, partial modification, or no change at all (for example, A is not changed). I take a block of text from a file (E, in the 2nd version) and check which files in the 1st version contain this text block. I find that B, C and D contain the text fragment. I want to determine which file (B, C or D) this text block actually comes from. (I assume that E is a file whose name changed in the second version.) Since the contents may be changed, added to or deleted in the later version, I use the LCS algorithm to measure similarity, but I cannot map each file to its previous version. I think one possible approach might be to use the location information of the matched text blocks, but this heuristic does not always work. Does any research or algorithm exist for this? Any direction will be helpful. Thanks in advance.
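
    A sketch of the similarity ranking in Python; difflib's SequenceMatcher uses Ratcliff/Obershelp matching rather than a pure LCS, and it gets slow on large inputs, so this only illustrates the shape of the comparison.

        import difflib

        def likely_origin(block, candidates):
            """Rank candidate old-version files by similarity to a text block.

            block      -- text fragment taken from the new version
            candidates -- dict mapping old file names to their contents
            """
            def score(name):
                return difflib.SequenceMatcher(None, block, candidates[name]).ratio()
            return max(candidates, key=score)

        # hypothetical file names and fragment
        old_files = {name: open(name).read() for name in ("B.txt", "C.txt", "D.txt")}
        print(likely_origin("some block of text from E", old_files))

    Position information, as the question suggests, could be folded in by weighting matches whose offsets in the old file track their offsets in the new one.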


  • WordPress 500 - Internal server error

    - by asad
    Hello folks, I installed WordPress 2.9.2 a few days ago and it works correctly. Today I wanted to use the permalink feature of WordPress. I know I must modify the .htaccess file in my site root, but in my sub-domain root there is no .htaccess file, so I created one with the following content in the sub-domain root (next to the index.php file):

        <files .htaccess>
        order allow,deny
        deny from all
        </files>
        ServerSignature Off
        <files wp-config.php>
        order allow,deny
        deny from all
        </files>
        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress
        Options All -Indexes
        AddType x-mapp-php5 .php
        AddHandler x-mapp-php5 .php

    But after saving it, my blog was gone, and I get the following error:

        500 - Internal server error.
        There is a problem with the resource you are looking for, and it cannot be displayed.

    After this I removed the .htaccess file, but that didn't fix it. What can I do? Cheers


  • How to get the OCI lib to work on a Red Hat machine with ROracle?

    - by Matt Bannert
    I need to get the OCI library working on my RHEL 6.3 machine, and I am having trouble with OCI header files that can't be found. I have installed (using yum install) oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm because, per the official page, it's all I need to run OCI. To test the whole thing in general I've installed sqlplus64, which worked after I set export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib. Unfortunately, the header files still couldn't be found after setting LD_LIBRARY_PATH. Actually, I am not surprised, because there is no include directory in any of these Oracle paths. So the question is: where do I get these missing header files? Are they actually already there and I just can't find them?

    BTW: I am doing this whole exercise because I want to use ROracle on my RStudio server, and that R package depends on the OCI library. Once I am back in R territory the road gets much less bumpy for me.

    EDIT: this documentation helped me a little further. However, I guess I found some header files in "/usr/include/oracle/11.2/client64". But which variable do I have to set to this location?

