Search Results

Search found 1359 results on 55 pages for 'uploading'.


  • Setup Remote Access in Windows Home Server

    - by Mysticgeek
    One of the many awesome features of Windows Home Server is the ability to access your server and other computers on your network remotely. Today we show you the steps to enable Remote Access to your home server from anywhere you have an Internet connection. Remote Access in Windows Home Server has a lot of great features like uploading and downloading files from shared folders, accessing files from machines on your network, and controlling machines remotely (on supported OS versions). Here we take a look at the basics of setting it up, choosing a domain name, and verifying you can connect remotely. Setup Remote Access in Windows Home Server Open the Windows Home Server Console and click on Settings. Next select Remote Access; it is off by default, so just click the button to turn it on. Wait while your router is configured for remote access, and when it’s complete click Next. Notice that it will enable UPnP; if you don’t wish to have that enabled, you can manually forward the correct ports. If you have any problems with the router being automatically configured, we’ll be taking a look at a more detailed troubleshooting guide in the future. The router is successfully configured, and we can continue to the next process of configuring our domain name. The Domain Name Setup Wizard will start. Notice you will need a Windows Live ID to set it up – which is typically your Hotmail address. If you don’t already have one, you can get one here. Type in your Live ID email address and password and click Next… Agree to the Home Server Privacy Statement and the Live Custom Domains Addendum. If you’re concerned about privacy and want to learn more about the domain addendum, make sure to read about it before agreeing. There is nothing abnormal to point out about either statement, but if this is your first time setting it up, it’s good to review the information. Now choose a name for the domain. You should select something that is easy to remember and identifies your home server. The name can contain up to 63 characters, numbers, letters, and hyphens… and must begin and end with a letter or number. When you have the name figured out, click the Confirm button. Note: You can only register one domain name per Live ID. If the name isn’t already taken, you’ll get a confirmation message indicating it’s good to go. The wizard is complete and you can now access the home server from the URL provided. A few other things to point out after you’ve set it up… under Domain Name click on the Details button, which pulls up the domain detail information, and you can refresh the data to verify everything is working correctly. Or you can click the Configure button and then change or release your current domain name. Under Web site settings, you can change your site page headline to whatever you want it to be. Accessing Home Server Remotely After you’ve gotten everything set up for your home server domain, you can begin to access it when you’re away from home. Simply type in the domain address you created in the previous steps. The start page is rather boring… and to start accessing your data, click the Log On button in the upper right-hand corner. Then enter your home server credentials to gain access to your files, folders, and network computers. You won’t be able to log in with your administrator user account, however, to protect the security of your network. Once you’re logged in, you’ll be able to access different parts of your home server shares and network computers.
Conclusion Now that you have Remote Access set up, you should be able to access and manage your files easily. Being able to access data from your home server remotely is great when you need to get certain files while on the road. The web UI is pretty self-explanatory, works best in IE as ActiveX is required, and is smooth and easy to work with. In future articles we’ll be covering a lot more regarding remote access, including more of the available features, troubleshooting connection issues, and enabling access for other users.

    Read the article

  • Convert Excel File 'xls' to CSV, CAUTION: Bumps Ahead

    - by faizanahmad
    The task was to provide users with an interface where they can upload 'csv' files; these files were to be processed and loaded to the database by a console application. The code in the console application could not handle 'xls' files, so we thought, OK, let's convert 'xls' to 'csv' in the code. Seemed like fun. The idea was to convert it right after uploading, ending up with a 'csv' file. As Microsoft does not recommend using the Excel objects in ASP.NET, we decided to use the Jet engine to open xls (the ACE driver is used for xlsx). The code was pretty straightforward and can be found at the following links: http://www.c-sharpcorner.com/uploadfile/yuanwang200409/102242008174401pm/1.aspx http://www.devasp.net/net/articles/display/141.html FIRST BUMP 'OleDbException (0x80004005): Unspecified error' (Impersonation): The above code ran fine in my test web site and test console application, but it gave an 'OleDbException (0x80004005): Unspecified error' in the main web site. It turns out impersonation was set to True, and as soon as I changed it to False, it did work. On my XP box, the web site was running under the 'ASPNET' user with impersonation set to FALSE, or under 'IUSR_*' (the IIS guest user) with impersonation set to TRUE. The weird part was that both users had the same rights on the folders I was saving files to and on the Excel app in DCOM Config. We decided to give it a try on Windows Server 2003 with the web site set to Windows authentication (impersonation = true), and yes, it did work. SECOND BUMP 'External table not in correct format': I got this error with some files, and it appeared that the file from the client had some metadata issues (when I opened the file in Excel and tried to save it, Excel would give me an error saying the file could not be saved in the current format), and the error was caused by that. Some people were able to resolve the error by using "Extended Properties=HTML Import;" in the connection string, but it did not work for me. We decided to detour from here and use the Excel object :( as we had no control over clients setting the metadata of the Excel files. Before the third bump there were a couple of smaller issues, like 'Retrieving the COM class factory for component with CLSID {00024500-0000-0000-C000-000000000046} failed due to the following error: 80070005'. The fix can be found at http://blog.crowe.co.nz/archive/2006/03/02/589.aspx THIRD BUMP (Could not get rid of the EXCEL process): I had all the code in place to quit Excel, but it just did not work. A workaround was done to kill the process, as we knew no other application on the server was using Excel. The normal steps to quit the Excel application worked just fine in a console application, though. FOURTH BUMP: The code worked with file 1 on my machine but would break with file 2, and the same code would work perfectly fine with file 2 on some other machine. We moved it to QA (Windows Server 2003) and it worked with every file just fine. But then there was another problem: one user could upload and a second couldn't, even though permissions on the folder and DCOM Config were checked. Another detour: upload the xls as it is and convert it in the console application. Lessons learned: if it's 'xlsx', use the ACE driver or read the XML within the Excel file, as recommended by MS.
If it's 'xls' and you know it's always going to be properly formatted, use the Jet engine. Code:

    Imports Microsoft.Office.Interop
    Imports System.IO

    Private Function ConvertFile(ByVal SourceFolder As String, ByVal FileName As String, ByVal FileExtension As String) As Boolean
        Dim appExcel As New Excel.Application
        Dim workBooks As Excel.Workbooks = appExcel.Workbooks
        Dim objWorkbook As Excel.Workbook
        Dim CompleteFilePath As String = SourceFolder & FileName & FileExtension
        Try
            objWorkbook = workBooks.Open(CompleteFilePath)
            objWorkbook.SaveAs(Filename:=CObj(SourceFolder & FileName & ".csv"), FileFormat:=Excel.XlFileFormat.xlCSV)
        Catch ex As Exception
            GenerateAlert(ex.Message.Replace("'", "") & " Error Converting File to CSV.")
            LogError(ex)
            Return False
        Finally
            If Not (objWorkbook Is Nothing) Then
                objWorkbook.Close(SaveChanges:=CObj(False))
            End If
            ReleaseObj(objWorkbook)
            ReleaseObj(workBooks)
            appExcel.Quit()
            ReleaseObj(appExcel)
            'Kill any leftover Excel processes (avoid this if you can)
            Dim proc As System.Diagnostics.Process
            For Each proc In System.Diagnostics.Process.GetProcessesByName("EXCEL")
                proc.Kill()
            Next
            DeleteSourceFile(CompleteFilePath)
        End Try
        Return True
    End Function

    Private Sub ReleaseObj(ByVal o As Object)
        Try
            System.Runtime.InteropServices.Marshal.ReleaseComObject(o)
        Catch ex As Exception
            LogError(ex)
        Finally
            o = Nothing
        End Try
    End Sub

    Protected Sub DeleteSourceFile(ByVal CompleteFilePath As String)
        Try
            Dim MyFile As FileInfo = New FileInfo(CompleteFilePath)
            If MyFile.Exists Then
                File.Delete(CompleteFilePath)
            Else
                Throw New FileNotFoundException()
            End If
        Catch ex As Exception
            GenerateAlert(" Source File could not be deleted.")
            LogError(ex)
        End Try
    End Sub

The code to kill the process (avoid it if you can):

    Dim proc As System.Diagnostics.Process
    For Each proc In System.Diagnostics.Process.GetProcessesByName("EXCEL")
        proc.Kill()
    Next
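
    For reference, here is a minimal sketch of the Jet/OLE DB route described in the links above, shown in C#. This is not the author's code; the sheet name "Sheet1", the HDR/IMEX settings, and the naive CSV quoting are illustrative assumptions:

        // Read the first worksheet of an .xls via the Jet OLE DB provider and write it out as CSV.
        // For .xlsx, swap in the ACE provider ("Microsoft.ACE.OLEDB.12.0" with "Excel 12.0 Xml").
        using System;
        using System.Data;
        using System.Data.OleDb;
        using System.IO;
        using System.Linq;

        class XlsToCsvSketch
        {
            static void ConvertXlsToCsv(string xlsPath, string csvPath)
            {
                string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + xlsPath +
                                 ";Extended Properties=\"Excel 8.0;HDR=Yes;IMEX=1\"";

                DataTable table = new DataTable();
                using (OleDbConnection conn = new OleDbConnection(connStr))
                using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
                {
                    adapter.Fill(table); // Fill opens and closes the connection itself
                }

                using (StreamWriter writer = new StreamWriter(csvPath))
                {
                    // Write the header row, then one quoted CSV line per data row.
                    writer.WriteLine(string.Join(",", table.Columns.Cast<DataColumn>()
                                                           .Select(c => c.ColumnName).ToArray()));
                    foreach (DataRow row in table.Rows)
                    {
                        string[] fields = row.ItemArray
                            .Select(f => "\"" + Convert.ToString(f).Replace("\"", "\"\"") + "\"")
                            .ToArray();
                        writer.WriteLine(string.Join(",", fields));
                    }
                }
            }
        }

    This sidesteps the Excel interop (and the orphaned EXCEL.exe processes) entirely, but as noted above it only works when the incoming 'xls' files are properly formatted.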

    Read the article

  • No, iCloud Isn’t Backing Them All Up: How to Manage Photos on Your iPhone or iPad

    - by Chris Hoffman
    Are the photos you take with your iPhone or iPad backed up in case you lose your device? If you’re just relying on iCloud to manage your important memories, your photos may not be backed up at all. Apple’s iCloud has a photo-syncing feature in the form of “Photo Stream,” but Photo Stream doesn’t actually perform any long-term backups of your photos. iCloud’s Photo Backup Limitations Assuming you’ve set up iCloud on your iPhone or iPad, your device is using a feature called “Photo Stream” to automatically upload the photos you take to your iCloud storage and sync them across your devices. Unfortunately, there are some big limitations here. 1000 Photos: Photo Stream only backs up the latest 1000 photos. Do you have 1500 photos in your Camera Roll folder on your phone? If so, only the latest 1000 photos are stored in your iCloud account online. If you don’t have those photos backed up elsewhere, you’ll lose them when you lose your phone. If you have 1000 photos and take one more, the oldest photo will be removed from your iCloud Photo Stream. 30 Days: Apple also states that photos in your Photo Stream will be automatically deleted after 30 days “to give your devices plenty of time to connect and download them.” Some people report photos aren’t deleted after 30 days, but it’s clear you shouldn’t rely on iCloud for more than 30 days of storage. iCloud Storage Limits: Apple only gives you 5 GB of iCloud storage space for free, and this is shared between backups, documents, and all other iCloud data. This 5 GB can fill up pretty quickly. If your iCloud storage is full and you haven’t purchased any more storage more from Apple, your photos aren’t being backed up. Videos Aren’t Included: Photo Stream doesn’t include videos, so any videos you take aren’t automatically backed up. It’s clear that iCloud’s Photo Stream isn’t designed as a long-term way to store your photos, just a convenient way to access recent photos on all your devices before you back them up for real. iCloud’s Photo Stream is Designed for Desktop Backups If you have a Mac, you can launch iPhoto and enable the Automatic Import option under Photo Stream in its preferences pane. Assuming your Mac is on and connected to the Internet, iPhoto will automatically download photos from your photo stream and make local backups of them on your hard drive. You’ll then have to back up your photos manually so you don’t lose them if your Mac’s hard drive ever fails. If you have a Windows PC, you can install the iCloud Control Panel, which will create a Photo Stream folder on your PC. Your photos will be automatically downloaded to this folder and stored in it. You’ll want to back up your photos so you don’t lose them if your PC’s hard drive ever fails. Photo Stream is clearly designed to be used along with a desktop application. Photo Stream temporarily backs up your photos to iCloud so iPhoto or iCloud Control Panel can download them to your Mac or PC and make a local backup before they’re deleted. You could also use iTunes to sync your photos from your device to your PC or Mac, but we don’t really recommend it — you should never have to use iTunes. How to Actually Back Up All Your Photos Online So Photo Stream is actually pretty inconvenient — or, at least, it’s just a way to temporarily sync photos between your devices without storing them long-term. But what if you actually want to automatically back up your photos online without them being deleted automatically? 
The solution here is a third-party app that does this for you, offering the automatic photo uploads with long-term storage. There are several good services with apps in the App Store: Dropbox: Dropbox’s Camera Upload feature allows you to automatically upload the photos — and videos — you take to your Dropbox account. They’ll be easily accessible anywhere there’s a Dropbox app and you can get much more free Dropbox storage than you can iCloud storage. Dropbox will never automatically delete your old photos. Google+: Google+ offers photo and video backups with its Auto Upload feature, too. Photos will be stored in your Google+ Photos — formerly Picasa Web Albums — and will be marked as private by default so no one else can view them. Full-size photos will count against your free 15 GB of Google account storage space, but you can also choose to upload an unlimited amount of photos at a smaller resolution. Flickr: The Flickr app is no longer a mess. Flickr offers an Auto Upload feature for uploading full-size photos you take and free Flickr accounts offer a massive 1 TB of storage for you to store your photos. The massive amount of free storage alone makes Flickr worth a look. Use any of these services and you’ll get an online, automatic photo backup solution you can rely on. You’ll get a good chunk of free space, your photos will never be automatically deleted, and you can easily access them from any device. You won’t have to worry about storing local copies of your photos and backing them up manually. Apple should fix this mess and offer a better solution for long-term photo backup, especially considering the limitations aren’t immediately obvious to users. Until they do, third-party apps are ready to step in and take their place. You can also automatically back up your photos to the web on Android with Google+’s Auto Upload or Dropbox’s Camera Upload. Image Credit: Simon Yeo on Flickr     

    Read the article

  • How to get sound on MacBook Pro 4,1

    - by Thomas
    I have just installed Xubuntu 12.04.2. My soundcard is detected: thomas@thomas-pc:~$ sudo aplay -l **** List of PLAYBACK Hardware Devices **** Home directory /home/thomas not ours. card 0: Intel [HDA Intel], device 0: ALC889A Analog [ALC889A Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 1: ALC889A Digital [ALC889A Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 Everything is put to max in alsamixer and nothing is muted (all the sliders are on OO. My speakers do not work, but when I plug in a headphone I hear it very soft. When I connect my stereo and put the sound VERY loud (3-blocks-of-complaining-neighbours loud) I hear it on a normal level but crackling. I added options snd-hda-intel model=mbp5 amixer set IEC958 off to at the end of /etc/modprobe.d/alsa-base.conf. When it's still not working I tried everything here: https://help.ubuntu.com/community/SoundTroubleshooting 1 >>> list-sinks 1 sink(s) available. * index: 0 name: <alsa_output.pci-0000_00_1b.0.analog-stereo> driver: <module-alsa-card.c> flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY DYNAMIC_LATENCY state: SUSPENDED suspend cause: IDLE priority: 9959 volume: 0: 100% 1: 100% 0: 0.00 dB 1: 0.00 dB balance 0.00 base volume: 100% 0.00 dB volume steps: 65537 muted: no current latency: 0.00 ms max request: 0 KiB max rewind: 0 KiB monitor source: 0 sample spec: s16le 2ch 44100Hz channel map: front-left,front-right Stereo used by: 0 linked by: 0 configured latency: 0.00 ms; range is 0.50 .. 371.52 ms card: 0 <alsa_card.pci-0000_00_1b.0> module: 4 properties: alsa.resolution_bits = "16" device.api = "alsa" device.class = "sound" alsa.class = "generic" alsa.subclass = "generic-mix" alsa.name = "ALC889A Analog" alsa.id = "ALC889A Analog" alsa.subdevice = "0" alsa.subdevice_name = "subdevice #0" alsa.device = "0" alsa.card = "0" alsa.card_name = "HDA Intel" alsa.long_card_name = "HDA Intel at 0x9b500000 irq 46" alsa.driver_name = "snd_hda_intel" device.bus_path = "pci-0000:00:1b.0" sysfs.path = "/devices/pci0000:00/0000:00:1b.0/sound/card0" device.bus = "pci" device.vendor.id = "8086" device.vendor.name = "Intel Corporation" device.product.name = "82801H (ICH8 Family) HD Audio Controller" device.form_factor = "internal" device.string = "front:0" device.buffering.buffer_size = "65536" device.buffering.fragment_size = "32768" device.access_mode = "mmap+timer" device.profile.name = "analog-stereo" device.profile.description = "Analog Stereo" device.description = "Built-in Audio Analog Stereo" alsa.mixer_name = "Realtek ALC889A" alsa.components = "HDA:10ec0885,106b3a00,00100103" module-udev-detect.discovered = "1" device.icon_name = "audio-card-pci" ports: analog-output-speaker: Speakers (priority 10000, available: unknown) properties: analog-output-headphones: Headphones (priority 9000, available: no) properties: active port: <analog-output-speaker> 2 and 3: Doesn't seem an permission issue, the sound is very far away (See opening paragraph). 4 thomas@thomas-pc:~$ sudo aplay -l **** List of PLAYBACK Hardware Devices **** Home directory /home/thomas not ours. card 0: Intel [HDA Intel], device 0: ALC889A Analog [ALC889A Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 1: ALC889A Digital [ALC889A Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 5 thomas@thomas-pc:~$ find /lib/modules/`uname -r` | grep snd /lib/modules/3.2.0-48-generic/kernel/sound/core/snd-hwdep.ko /lib/modules/3.2.0-48-generic/kernel/sound/core/snd-pcm.ko [.. 
huge lists continues ..] /lib/modules/3.2.0-48-generic/kernel/sound/pcmcia/pdaudiocf/snd-pdaudiocf.ko /lib/modules/3.2.0-48-generic/kernel/sound/pcmcia/vx/snd-vxpocket.ko thomas@thomas-pc:~$ 6 thomas@thomas-pc:~$ lspci -v | grep -A7 -i "audio" 00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 03) Subsystem: Apple Inc. Device 00a4 Flags: bus master, fast devsel, latency 0, IRQ 46 Memory at 9b500000 (64-bit, non-prefetchable) [size=16K] Capabilities: <access denied> Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 7 I guess it's supported. Linux mint and Xubuntu 13.04 had no trouble with sounds. Everything worked out of the box Thanks in advance Edit: alsa-info.sh output: WARNING: /etc/modprobe.d/alsa-base.conf line 45: ignoring bad line starting with 'amixer' ALSA Information Script v 0.4.62 -------------------------------- This script visits the following commands/files to collect diagnostic information about your ALSA installation and sound related hardware. dmesg lspci lsmod aplay amixer alsactl /proc/asound/ /sys/class/sound/ ~/.asoundrc (etc.) See './alsa-info.sh --help' for command line options. WARNING: /etc/modprobe.d/alsa-base.conf line 45: ignoring bad line starting with 'amixer' Automatically upload ALSA information to www.alsa-project.org? [y/N] : y Uploading information to www.alsa-project.org ... Done! Your ALSA information is located at http://www.alsa-project.org/db/?f=6cffc584284d4c0b266eb53249824ef83d6c4e3e Please inform the person helping you. thomas@thomas-pc:~$

    Read the article

  • Simple Preferred time control using Silverlight 3

    - by mohanbrij
    Here I am going to show you a simple preferred time control, where you can select the day of the week and the time of the day. This can be used in lots of places where you may need to display the user's preferred times. A sample screenshot is attached below. This control is developed using Silverlight 3 and VS2008, and I am also attaching the source code with this post. This is a very basic example; you can download it and customize it further for your requirements if you want. I am trying to explain in a few words how this control works and the different ways in which you can customize it further. File: PreferredTimeControl.xaml – in this file I have just hardcoded the controls and their positions, which you can see in the screenshot above. In this example, to change the start day of the week and the times, you will have to go and change the design in the XAML file; it's not controlled by properties or implementation classes. You can also customize it to change the start day of the week, language, display format, styles, etc. File: PreferredTimeControl.xaml.cs – in this control, using the code below, first I take all the checkboxes from the form and store them in a global variable, which I can use across my page.

    List<CheckBox> checkBoxList;

    #region Constructor
    public PreferredTimeControl()
    {
        InitializeComponent();
        GetCheckboxes(); //Keep all the checkboxes in the list in the load itself
    }
    #endregion

    #region Helper Methods
    private List<CheckBox> GetCheckboxes()
    {
        //Get all the CheckBoxes in the Form
        checkBoxList = new List<CheckBox>();
        foreach (UIElement element in LayoutRoot.Children)
        {
            if (element.GetType().ToString() == "System.Windows.Controls.CheckBox")
            {
                checkBoxList.Add(element as CheckBox);
            }
        }
        return checkBoxList;
    }

    Then I am exposing two methods which you can use in the container form to get and set the values in this control.

    /// <summary>
    /// Set the Availability on the Form, with the Provided Timings
    /// </summary>
    /// <param name="selectedTimings">Provided timings come from the DB in the form 11,12,13....37,
    /// where 11 refers to Monday Morning, 12 Tuesday Morning, etc.
    /// Here 1, 2, 3 is for Morning, Afternoon and Evening respectively, and for weekdays
    /// 1,2,3,4,5,6,7 where 1 is for Monday, then Tuesday, Wednesday, Thursday, Friday, Saturday and Sunday respectively.
    /// So if we want Monday Morning, we can denote it as 11; similarly for Saturday Evening we can write 36, etc.
    /// </param>
    public void SetAvailibility(string selectedTimings)
    {
        foreach (CheckBox chk in checkBoxList)
        {
            chk.IsChecked = false;
        }
        if (!String.IsNullOrEmpty(selectedTimings))
        {
            string[] selectedString = selectedTimings.Split(',');
            foreach (string selected in selectedString)
            {
                foreach (CheckBox chk in checkBoxList)
                {
                    if (chk.Tag.ToString() == selected)
                    {
                        chk.IsChecked = true;
                    }
                }
            }
        }
    }

    /// <summary>
    /// Gets the Availibility from the selected checkboxes
    /// </summary>
    /// <returns>String in the format of 11,12,13...41,42...31,32...37</returns>
    public string GetAvailibility()
    {
        string selectedText = string.Empty;
        foreach (CheckBox chk in GetCheckboxes())
        {
            if (chk.IsChecked == true)
            {
                selectedText = chk.Tag.ToString() + "," + selectedText;
            }
        }
        return selectedText;
    }

    In my example I am using a matrix format for the day and time, for example Monday=1, Tuesday=2, Wednesday=3, Thursday=4, Friday=5, Saturday=6, Sunday=7, and Morning=1, Afternoon=2, Evening=3. So if I want to represent Morning-Monday I represent it as 11, Afternoon-Tuesday as 22, Morning-Wednesday as 13, etc.
And the other way round, to set the values in the control I pass them in the same format, for example:

    preferredTimeControl.SetAvailibility("11,12,13,16,23,22");

This will set the checkbox values for Morning-Monday, Morning-Tuesday, Morning-Wednesday, Morning-Saturday, Afternoon-Tuesday and Afternoon-Wednesday. To implement this control, first I have to import it in an xmlns namespace as xmlns:controls="clr-namespace:PreferredTimeControlApp" and finally put it in your page wherever you want:

    <Grid x:Name="LayoutRoot" Style="{StaticResource LayoutRootGridStyle}">
        <Border x:Name="ContentBorder" Style="{StaticResource ContentBorderStyle}">
            <controls:PreferredTimeControl x:Name="preferredTimeControl"></controls:PreferredTimeControl>
        </Border>
    </Grid>

And in the code-behind you can just include this code:

    private void InitializeControl()
    {
        preferredTimeControl.SetAvailibility("11,12,13,16,23,22");
    }

And you are ready to go. For more details you can refer to my attached code. I know there can be an even simpler and better way to do this; let me know if you have any other ideas. Sorry guys, I have still used Silverlight 3 and VS2008, as the system I am uploading this from has not been upgraded yet, but you can use the same code with Silverlight 4 and VS2010 without any changes. It may just ask you to upgrade your project, which will take care of the rest. Download Source Code. Thanks ~Brij

    Read the article

  • Basic shadow mapping fails on NVIDIA card?

    - by James
    Recently I switched from an AMD Radeon HD 6870 card to an (MSI) NVIDIA GTX 670 for performance reasons. I found however that my implementation of shadow mapping in all my applications failed. In a very simple shadow POC project the problem appears to be that the scene being drawn never results in a draw to the depth map and as a result the entire depth map is just infinity, 1.0 (Reading directly from the depth component after draw (glReadPixels) shows every pixel is infinity (1.0), replacing the depth comparison in the shader with a comparison of the depth from the shadow map with 1.0 shadows the entire scene, and writing random values to the depth map and then not calling glClear(GL_DEPTH_BUFFER_BIT) results in a random noisy pattern on the scene elements - from which we can infer that the uploading of the depth texture and comparison within the shader are functioning perfectly.) Since the problem appears almost certainly to be in the depth render, this is the code for that: const int s_res = 1024; GLuint shadowMap_tex; GLuint shadowMap_prog; GLint sm_attr_coord3d; GLint sm_uniform_mvp; GLuint fbo_handle; GLuint renderBuffer; bool isMappingShad = false; //The scene consists of a plane with box above it GLfloat scene[] = { -10.0, 0.0, -10.0, 0.5, 0.0, 10.0, 0.0, -10.0, 1.0, 0.0, 10.0, 0.0, 10.0, 1.0, 0.5, -10.0, 0.0, -10.0, 0.5, 0.0, -10.0, 0.0, 10.0, 0.5, 0.5, 10.0, 0.0, 10.0, 1.0, 0.5, ... }; //Initialize the stuff used by the shadow map generator int initShadowMap() { //Initialize the shadowMap shader program if (create_program("shadow.v.glsl", "shadow.f.glsl", shadowMap_prog) != 1) return -1; const char* attribute_name = "coord3d"; sm_attr_coord3d = glGetAttribLocation(shadowMap_prog, attribute_name); if (sm_attr_coord3d == -1) { fprintf(stderr, "Could not bind attribute %s\n", attribute_name); return 0; } const char* uniform_name = "mvp"; sm_uniform_mvp = glGetUniformLocation(shadowMap_prog, uniform_name); if (sm_uniform_mvp == -1) { fprintf(stderr, "Could not bind uniform %s\n", uniform_name); return 0; } //Create a framebuffer glGenFramebuffers(1, &fbo_handle); glBindFramebuffer(GL_FRAMEBUFFER, fbo_handle); //Create render buffer glGenRenderbuffers(1, &renderBuffer); glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer); //Setup the shadow texture glGenTextures(1, &shadowMap_tex); glBindTexture(GL_TEXTURE_2D, shadowMap_tex); glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, s_res, s_res, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); return 0; } //Delete stuff void dnitShadowMap() { //Delete everything glDeleteFramebuffers(1, &fbo_handle); glDeleteRenderbuffers(1, &renderBuffer); glDeleteTextures(1, &shadowMap_tex); glDeleteProgram(shadowMap_prog); } int loadSMap() { //Bind MVP stuff glm::mat4 view = glm::lookAt(glm::vec3(10.0, 10.0, 5.0), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0)); glm::mat4 projection = glm::ortho<float>(-10,10,-8,8,-10,40); glm::mat4 mvp = projection * view; glm::mat4 biasMatrix( 0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0 ); glm::mat4 lsMVP = biasMatrix * mvp; //Upload light source matrix to the main shader programs glUniformMatrix4fv(uniform_ls_mvp, 1, GL_FALSE, glm::value_ptr(lsMVP)); glUseProgram(shadowMap_prog); glUniformMatrix4fv(sm_uniform_mvp, 1, 
GL_FALSE, glm::value_ptr(mvp)); //Draw to the framebuffer (with depth buffer only draw) glBindFramebuffer(GL_FRAMEBUFFER, fbo_handle); glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer); glBindTexture(GL_TEXTURE_2D, shadowMap_tex); glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowMap_tex, 0); glDrawBuffer(GL_NONE); glReadBuffer(GL_NONE); GLenum result = glCheckFramebufferStatus(GL_FRAMEBUFFER); if (GL_FRAMEBUFFER_COMPLETE != result) { printf("ERROR: Framebuffer is not complete.\n"); return -1; } //Draw shadow scene printf("Creating shadow buffers..\n"); int ticks = SDL_GetTicks(); glClear(GL_DEPTH_BUFFER_BIT); //Wipe the depth buffer glViewport(0, 0, s_res, s_res); isMappingShad = true; //DRAW glEnableVertexAttribArray(sm_attr_coord3d); glVertexAttribPointer(sm_attr_coord3d, 3, GL_FLOAT, GL_FALSE, 5*4, scene); glDrawArrays(GL_TRIANGLES, 0, 14*3); glDisableVertexAttribArray(sm_attr_coord3d); isMappingShad = false; glBindFramebuffer(GL_FRAMEBUFFER, 0); printf("Render Sbuf in %dms (GLerr: %d)\n", SDL_GetTicks() - ticks, glGetError()); return 0; } This is the full code for the POC shadow mapping project (C++) (Requires SDL 1.2, SDL-image 1.2, GLEW (1.5) and GLM development headers.) initShadowMap is called, followed by loadSMap, the scene is drawn from the camera POV and then dnitShadowMap is called. I followed this tutorial originally (Along with another more comprehensive tutorial which has disappeared as this guy re-configured his site but used to be here (404).) I've ensured that the scene is visible (as can be seen within the full project) to the light source (which uses an orthogonal projection matrix.) Shader utilities function fine in non-shadow-mapped projects. I should also note that at no point is the GL error state set. What am I doing wrong here and why did this not cause problems on my AMD card? (System: Ubuntu 12.04, Linux 3.2.0-49-generic, 64 bit, with the nvidia-experimental-310 driver package. All other games are functioning fine so it's most likely not a card/driver issue.)

    Read the article

  • Backup Your Windows Home Server Off-Site with Asus Webstorage

    - by Mysticgeek
    Windows Home Server lets you back up machines on your network easily. But what about backing up the server data? Today we take a look at ASUS WebStorage for Windows Home Server, which provides you with secure off-site backup for WHS. To use the ASUS WebStorage service you’ll need to sign up for a free account. It offers 1GB of free storage, and you can purchase an unlimited backup package for $39.99 for a one-year subscription. Note: They offer online storage for individual PCs as well. Install ASUS WebStorage for WHS Browse to your shared folders on the server, open the Add-Ins folder, copy over the WHSConnectorSetup2.2.4.088.msi file (link below), then close out of the folder. Now launch Windows Home Server Console from one of the computers on your network, click Settings, then Add-ins. Under Available Add-ins click the Available tab and you’ll see the ASUS WebStorage installer file we just copied over. Click the Install button. Installation kicks off and when it’s complete, you’ll need to close out of the console and reconnect. Using ASUS WebStorage WHS Connector When you reconnect to the WHS Console, scroll over to the ASUS WebStorage icon and click on Settings. Now log into your ASUS account… Now select the folders you want to back up to the WebStorage service. Select the radio button next to Enable to initialize the backup process… The backup process begins. You can change which folders are backed up simply by disabling the backup process, unchecking the folder(s), then enabling the backup again. ASUS WebStorage Site After you have files backed up to the ASUS site, log into your account, and you’re presented with an overview of the amount of storage you’re using. It also shows what types of files are taking up space.   You can browse through your backed up files and folders. It allows you to share and sync backed up data as well. Navigate to the file you want and you can easily download it by clicking on it, or share it out by clicking the share link below it. If you choose to share it, you’re provided with a link to the file to send out to other users.   Conclusion Users of Windows Home Server have been looking for an inexpensive cloud backup solution for quite some time. There are services such as JungleDisk, KeepVault, Wuala, etc. These services probably do a better job, but can start getting expensive once you start uploading GBs of data. Another disappointment of ASUS WebStorage is that you can only back up your WHS shares (from what we’ve been able to determine); it’s an “all or nothing” type of thing. You cannot go in and select individual files and folders. The initial upload speeds can be a bit slow as well, although that might have something to do with limited upload speeds on the DSL connection we used to test it. Retrieving your data from the ASUS site is a breeze though, and all the data files are organized quite well. The WHS Addin is very easy to install and use. If you’re looking for an off-site solution to back up your WHS data, you can test out ASUS WebStorage for free with a 1GB limit. This is good for testing the service and it might be exactly what you’re looking for. Other users may want a more advanced solution like KeepVault or CloudBerry…which is a front end for Amazon S3 storage.
Download ASUS WebStorage WHS Addin
Other WHS Offsite Backup Solutions: CloudBerry, JungleDisk, KeepVault, Wuala

    Read the article

  • Branching and Merging Improvements in TFS2010

    - by jehan
    Introducing the concept of “first class branches” is a significant improvement as part of the 2010 release with respect to version control.  Not only does it help to distinguish between folders and branches, but it enables branch visualizations. Let us see the improvements in detail. ·         In TFS2008, you don’t know which of the folders are Branches: all folders look the same; they all have the folder icon. Now, in TFS 2010 there is a new icon that shows which of the folders is a Branch.       ·      There is no visual means to manage branches in TFS2008:   You don’t have any means to identify which branches are related and the relation type. Now, in TFS 2010 you have visual tools to see the Branch Hierarchy. In order to see a Branch Hierarchy just right-click the Branch and choose: Branching and Merging –> View Hierarchy     ·         In TFS2008, there is no option to track the path of changes between Branches:  If you have made a merge in a Branch you can’t track which Branch this Merge came from. Now, you have tools that show the path of a change between the Branches, and you can also see where the change was added on a timeline.  In order to track a change do the following: Step 1: Right-click the Branch and click View History.   Step 2: Choose a changeset to track and click the “Track Changeset” button.     Step 3: Choose the branches that will be in the view and click “Visualize”. In the above visual, you can see that Changesets 108, 109, 110 and 119 were merged from Main to the Release1.0 Branch, and then “Release_1.0” was branched to “Dev1.0”. Step 4: You can also see the Merges on a Timeline by clicking on the “Timeline Tracking” button.   Creating New Branches: In TFS 2010, the creation of branches has been streamlined a bit from the process in 2008.  In 2008, creating a new branch was like every other action in the system – changes were pended on the client, and then checked in to the server. Because of this, creating a new branch in TFS2008 was a time-consuming process.  In TFS2010, the step where changes are pended has been bypassed and branch creation is now performed entirely on the server.  With this approach, the round trip time for downloading a copy of each file on the branch and then uploading each file again has been eliminated.  Note: In TFS2010, the new branch will be created and committed as a single operation on the server. Pending changes will not be created; it doesn’t require a check-in as it will be carried out as a single operation, and it’s not possible to cancel.     Manage Branch Permissions: The properties view for branches is also different from that of ordinary folders or files, containing some metadata for the branch, relationship information, and permissions for the branch. In TFS2008, any user who has Checkout and Check-in permissions can create a branch. But in TFS2010 you can control the permissions for Branches using Manage Branch permissions.   Reparent option in TFS2010: In TFS2008, if we have two branches which don’t have a parent-child relation and we want to perform a merge between these two branches, then we have to perform a baseless merge using the tf.exe command line. I have two branches, Release_1.0 and Dev1.0_F2, which don’t have any relation between them; that’s why when I click on the merge option in Release_1.0, the Dev1.0_F2 branch is not shown as a Target Branch to perform the merge.     Let us see what we can do about this in TFS2010: first perform a TFS baseless merge to establish a relationship between the parent branch and the child branches.  
It will only merge the folder, not its contents. TFS baseless merges are performed via the command line using the VS2010 command prompt; do the following:   tf merge /baseless <ParentBranch> <childBranch> Check in your pending changes. It will create the link between the branches, but the relationships are still not complete.  Now, select the child branch in Source Control Explorer and from the File menu choose Source Control –> Branching and Merging –> Reparent.      In the dialog box, choose the appropriate branch as the new parent.   Click Reparent and then go to the parent branch and click merge. Now you will see that the Dev1.0_F2 branch is added in the Target Branch option.         Converting Folders to Branches and Branches to Folders: You can convert any Folder to a Branch from the context menu by right-clicking the folder –> Branching and Merging –> Convert to Branch. In a similar way, you can convert Branches to Folders using the Convert to Folder option available in the File menu (File –> Source Control –> Branching and Merging –> Convert to Folder). This option is not available in the context menu.

    Read the article

  • Windows Azure Recipe: Software as a Service (SaaS)

    - by Clint Edmonson
    The cloud was tailor-made for aspiring companies to create innovative internet-based applications and solutions. Whether you’re a garage startup with very little capital or a Fortune 1000 company, the ability to quickly set up, deliver, and iterate on new products is key to capturing market and mind share. And if you can capture that share and go viral, having resiliency and infinite scale at your fingertips is great peace of mind. Drivers Cost avoidance Time to market Scalability Solution Here’s a sketch of how a basic Software as a Service solution might be built out: Ingredients Web Role – this hosts the core web application. Each web role will host an instance of the software and as the user base grows, additional roles can be spun up to meet demand. Access Control – this service is essential to managing user identity. It’s backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability but it’s also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model provides extensibility to hook into other industry-specific identity providers as well. Databases – nearly every modern SaaS application is backed by a relational database for its core operational data. If the solution is sold to organizations, there’s a good chance multi-tenancy will be needed. An emerging best practice for SaaS applications is to stand up separate SQL Azure database instances for each tenant’s proprietary data to ensure isolation from other tenants. Worker Role – this is the best place to handle autonomous background processing such as data aggregation, billing through external services, and other specialized tasks that can be performed asynchronously. Placing these tasks in a worker role frees the web roles to focus completely on user interaction and data input and provides finer-grained control over the system’s scalability and throughput. Caching (optional) – as web site traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service. Blobs (optional) – depending on the nature of the software, users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data (a short code sketch of this appears at the end of this entry). The storage facilities can also integrate with the Access Control service to ensure users’ data is delivered securely. Training & Examples These links point to online Windows Azure training labs and examples where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure (16 labs) Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. 
New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML. SQL Azure (7 labs) Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data. Windows Azure Services (9 labs) As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure. Developing Applications for the Cloud, 2nd Edition (eBook) This book demonstrates how you can create from scratch a multi-tenant, Software as a Service (SaaS) application to run in the cloud using the latest versions of the Windows Azure Platform and tools. The book is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that run on or interact with the cloud. Fabrikam Shipping (SaaS reference application) This is a full end-to-end sample scenario which demonstrates how to use the Windows Azure platform for exposing an application as a service. We developed this demo just as you would: we had an existing on-premises sample, Fabrikam Shipping, and we wanted to see what it would take to transform it into a full subscription-based solution. The demo you find here is the result of that investigation. See my Windows Azure Resource Guide for more guidance on how to get started, including more links to web portals, training kits, samples, and blogs related to Windows Azure.
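
As an illustration of the Blobs ingredient above, here is a hedged sketch of uploading a user file to blob storage using the classic Microsoft.WindowsAzure.Storage client library; the connection string, the "userfiles" container name, and the per-tenant layout are illustrative assumptions rather than part of the recipe:

    using System.IO;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class UserBlobStorageSketch
    {
        public static void UploadUserFile(string connectionString, string localPath, string blobName)
        {
            CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
            CloudBlobClient client = account.CreateCloudBlobClient();

            // A container per tenant (or per user) keeps each tenant's data isolated,
            // in line with the multi-tenancy guidance above; "userfiles" is a placeholder.
            CloudBlobContainer container = client.GetContainerReference("userfiles");
            container.CreateIfNotExists();

            CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
            using (FileStream stream = File.OpenRead(localPath))
            {
                blob.UploadFromStream(stream);
            }
        }
    }

In a web or worker role the connection string would typically come from the role's configuration, and a shared access signature (rather than the account key) can be handed to clients when they need to read their own data directly.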

    Read the article

  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
    The first post was the high level overview. Now it is time for the details on what was done to the existing CMMI Project based on CMMI v4.2. The first step was to go into Visual Studio, then to the Team Project Collection Settings and then to the Process Template Manager.  Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and downloading it to a location I could reference later (for example C:\Templates). Then on to using the steps from the guidance post. Since I was using an x64 deployment, I will make reference to the path as <toolpath>; however, the actual path to reference in a 64-bit environment is “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE”. As I mentioned in the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs.  If you did not apply any changes to the names and such, then you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse. Now, the work needed with the witadmin tool. That includes the uploading of the structures that differ from v4.2 to v5.0. There is likely going to be an issue with the naming of some fields. For example, TFS 2010 likes something along the lines of “Area ID”, whereas TFS 2008 would have had it as “AreaID”.  So, this will need to be corrected.  Some posts will have you go through this after the errors pop up.  I would recommend doing this process prior to executing the importwitd process.

    witadmin listfields /collection:<path to collection> > c:\ListFields.txt

    Review the following fields: for AreaID, review the Name property and validate whether it states “AreaID”; if so, you will need to rename the Name field to reflect “Area ID”. ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID would be the other fields to check. To correct the issue, execute the following:

    witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count"

    Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID.  Once this is done, proceed with the commands below.

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml"
    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml"
    witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml"

    Modifications to the Bug Definition: The first step is to export the existing definition.

    witadmin exportwitd /collection:<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Make modifications to the recently exported MyBug.xml file. 
Details for the modification are here:  http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask Once the changes are done, proceed with the import command:

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

Repeat the process for the Scenario or Requirement Type Definition:

    witadmin exportwitd /collection:<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

Make modifications to the recently exported MyRequirement.xml file.  Details for the modification are here:  http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask Once the changes are done, proceed with the import command:

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

Provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping

    tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>

    Read the article

  • What developer conferences are you going to this year?

    - by mbcrump
    This short list is what I consider to be the “cream of the crop” in developer conferences. This is also a list of the conferences that I plan on attending in 2011. If you feel your conference is just as good, then shoot me an email at michael[at]michaelcrump[dot]net, and if possible I will check it out.   In-Person Event Las Vegas on April 18th-22nd, 2011 Redmond on October 17th-21st, 2011 Orlando on December 5th-9th, 2011 Visual Studio Live – I attended this event in November of last year and blogged about my experience. I am also planning on going back to the Orlando session in December of this year. So what did I like the most about this event? Being able to interact one-on-one with a majority of the speakers. If you read my blog post then you will see a list of the speakers that I met up with. I also made a lot of great connections with other professional developers all over the world. They are having an event in Las Vegas on April 18th-22nd. I noticed at this event that they have added a new track on mobile. Being a big fan of mobile, I feel that this is a great move. They also have a great selection for Silverlight developers, including Billy Hollis and Rocky Lhotka. For the full lineup of conference tracks, sessions and speakers visit http://bit.ly/VSLiveTrks. If you are interested in this then you can register here by February 16th. I must add that you can save $300 by getting the early-bird special.   Virtual Conference SSWUG (DBTechCon) - holds the largest virtual conference in the information technology industry. It is also special to me because they selected a majority of my Silverlight content for the April conference. No traveling fees, and all of the sessions are recorded so you can watch them on-demand, for $189 (early-bird special). For the entire speaker list, click here. The session list has also been published. If you are interested in this then you can register here.   In-Person Event Knoxville, TN on June 3rd/4th 2011. Codestock.org – If you live in the South then you have heard of CodeStock. To my knowledge, they have only had 3 events so far and they were a huge success. It was such a success that after the last event, everyone was telling me how good it was and how much they enjoyed it. They currently have a call for speakers going on right now, so if you have sessions then be sure to submit yours. So, what makes them stand out? Well for starters, Michael Neal (organizer) developed an open API so conference attendees could build their own apps for the sessions. They also encouraged their speakers to go to other sessions instead of staying in a “speaker room”. Another cool feature is that they are uploading videos from the conference so everyone can benefit. They are currently looking for sponsorship, so help out if you can.   In-Person Event Redmond, WA on October 28/29 2011 *NOT 100% SURE AT THIS POINT* PDC 11 – OK, so the logo should be pdc11 but it’s not out yet. This event is located on Microsoft’s campus in Redmond, WA. It is probably one of the most well-known conferences for developers to attend. One of the big perks of this event is that you typically come away with free stuff. In 2010 they gave away Windows Phone 7 devices. I remember years earlier they gave away laptops. This of course isn’t the only reason to go; you may get to tour the Microsoft campus. Since PDC is a huge event, you can view all of the sessions for free. Mike Taulty created a nice Silverlight application that consumes the OData feed. You can download it here. 
If everything goes as planned, I will be at all of these events. If you plan on going, then send me a tweet and we will do lunch or dinner. I love meeting new developers and talking .NET.  Subscribe to my feed

    Read the article

  • Objects won't render when Texture Compression + Mipmapping is Enabled

    - by felipedrl
    I'm optimizing my game and I've just implemented compressed (DXTn) texture loading in OpenGL. I've worked my way through removing bugs but I can't figure out this one: objects with DXTn + mipmapped textures are not being rendered. It's not like they are appearing with a flat color, they just don't appear at all. DXTn textured objects render, and mipmapped non-compressed textures render just fine. The texture in question is 256x256 and I generate the mips all the way down to 4x4, i.e. 1 block. I've checked in gDebugger and it displays all the levels (7) just fine. I'm using GL_LINEAR_MIPMAP_NEAREST for the min filter and GL_LINEAR for the mag filter. The texture is compressed and the mipmaps are created offline with the Paint.NET tool using the super sampling method (I also tried bilinear just in case). Source follows: [SNIPPET 1: Loading DDS into sys memory + Initializing Object] // Read header DDSHeader header; file.read(reinterpret_cast<char*>(&header), sizeof(DDSHeader)); uint pos = static_cast<uint>(file.tellg()); file.seekg(0, std::ios_base::end); uint dataSizeInBytes = static_cast<uint>(file.tellg()) - pos; file.seekg(pos, std::ios_base::beg); // Read file data mData = new unsigned char[dataSizeInBytes]; file.read(reinterpret_cast<char*>(mData), dataSizeInBytes); file.close(); mMipmapCount = header.mipmapcount; mHeight = header.height; mWidth = header.width; mCompressionType = header.pf.fourCC; // Only support files divisible by 4 (for compression blocks algorithms) massert(mWidth % 4 == 0 && mHeight % 4 == 0); massert(mCompressionType == NO_COMPRESSION || mCompressionType == COMPRESSION_DXT1 || mCompressionType == COMPRESSION_DXT3 || mCompressionType == COMPRESSION_DXT5); // Allow textures up to 65536x65536 massert(header.mipmapcount <= MAX_MIPMAP_LEVELS); mTextureFilter = TextureFilter::LINEAR; if (mMipmapCount > 0) { mMipmapFilter = MipmapFilter::NEAREST; } else { mMipmapFilter = MipmapFilter::NO_MIPMAP; } mBitsPerPixel = header.pf.bitcount; if (mCompressionType == NO_COMPRESSION) { if (header.pf.flags & DDPF_ALPHAPIXELS) { // The only format supported w/ alpha is A8R8G8B8 massert(header.pf.amask == 0xFF000000 && header.pf.rmask == 0xFF0000 && header.pf.gmask == 0xFF00 && header.pf.bmask == 0xFF); mInternalFormat = GL_RGBA8; mFormat = GL_BGRA; mDataType = GL_UNSIGNED_BYTE; } else { massert(header.pf.rmask == 0xFF0000 && header.pf.gmask == 0xFF00 && header.pf.bmask == 0xFF); mInternalFormat = GL_RGB8; mFormat = GL_BGR; mDataType = GL_UNSIGNED_BYTE; } } else { uint blockSizeInBytes = 16; switch (mCompressionType) { case COMPRESSION_DXT1: blockSizeInBytes = 8; if (header.pf.flags & DDPF_ALPHAPIXELS) { mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT; } else { mInternalFormat = GL_COMPRESSED_RGB_S3TC_DXT1_EXT; } break; case COMPRESSION_DXT3: mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT; break; case COMPRESSION_DXT5: mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT; break; default: // Not supported (DXT2, DXT4 or any other compression format) massert(false); } } [SNIPPET 2: Uploading into video memory] massert(mData != NULL); glGenTextures(1, &mHandle); massert(mHandle!=0); glBindTexture(GL_TEXTURE_2D, mHandle); commitFiltering(); uint offset = 0; Renderer* renderer = Renderer::getInstance(); switch (mInternalFormat) { case GL_RGB: case GL_RGBA: case GL_RGB8: case GL_RGBA8: for (uint i = 0; i < mMipmapCount + 1; ++i) { uint width = std::max(1U, mWidth >> i); uint height = std::max(1U, mHeight >> i); glTexImage2D(GL_TEXTURE_2D, i, mInternalFormat, width, height, mHasBorder, mFormat, mDataType, &mData[offset]); offset += width * height * (mBitsPerPixel / 8); } break; case GL_COMPRESSED_RGB_S3TC_DXT1_EXT: case GL_COMPRESSED_RGBA_S3TC_DXT1_EXT: case GL_COMPRESSED_RGBA_S3TC_DXT3_EXT: case GL_COMPRESSED_RGBA_S3TC_DXT5_EXT: { uint blockSize = 16; if (mInternalFormat == GL_COMPRESSED_RGB_S3TC_DXT1_EXT || mInternalFormat == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT) { blockSize = 8; } uint width = mWidth; uint height = mHeight; for (uint i = 0; i < mMipmapCount + 1; ++i) { uint nBlocks = ((width + 3) / 4) * ((height + 3) / 4); // Only POT textures allowed for mipmapping massert(width % 4 == 0 && height % 4 == 0); glCompressedTexImage2D(GL_TEXTURE_2D, i, mInternalFormat, width, height, mHasBorder, nBlocks * blockSize, &mData[offset]); offset += nBlocks * blockSize; if (width <= 4 && height <= 4) { break; } width = std::max(4U, width / 2); height = std::max(4U, height / 2); } break; } default: // Not supported massert(false); } Also, I don't understand the "+3" in the block count computation, but while looking for a solution to my problem I've encountered people defining it like that. I guess it won't make a difference for POT textures, but I put it in just in case. Thanks.
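
    For reference, the "+3" is just integer ceiling division: ((width + 3) / 4) is the number of 4-texel blocks needed to cover the width, so a 1- or 2-texel mip level still counts as one block. For a power-of-two chain like 256x256 it gives the same result as width / 4 until a dimension drops below 4, which is why the snippet above also clamps to 4. Below is a small stand-alone sketch of the same bookkeeping, written in Java purely as an illustration; the names are made up, not taken from the code above.

        public class DxtMipSizeSketch {
            // Blocks needed to cover 'texels' texels: integer ceiling division by 4.
            static int blocks(int texels) {
                return (texels + 3) / 4;
            }

            public static void main(String[] args) {
                int width = 256, height = 256;
                int blockBytes = 8; // DXT1 block size; DXT3/DXT5 blocks are 16 bytes
                int offset = 0;
                for (int level = 0; ; level++) {
                    int levelBytes = blocks(width) * blocks(height) * blockBytes;
                    System.out.printf("level %d: %dx%d -> %d bytes at offset %d%n",
                            level, width, height, levelBytes, offset);
                    offset += levelBytes;
                    if (width == 1 && height == 1) {
                        break;
                    }
                    width = Math.max(1, width / 2);
                    height = Math.max(1, height / 2);
                }
            }
        }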

    Read the article

  • iOS Support with Windows Azure Mobile Services – now with Push Notifications

    - by ScottGu
    A few weeks ago I posted about a number of improvements to Windows Azure Mobile Services. One of these was the addition of an Objective-C client SDK that allows iOS developers to easily use Mobile Services for data and authentication.  Today I'm excited to announce a number of improvements to our iOS SDK and, most significantly, our new support for Push Notifications via APNS (Apple Push Notification Services).  This makes it incredibly easy to fire push notifications to your iOS users from Windows Azure Mobile Service scripts. Push Notifications via APNS We've provided two complete tutorials that take you step-by-step through the provisioning and setup process to enable your Windows Azure Mobile Service application with APNS (Apple Push Notification Services), including all of the steps required to configure your application for push in the Apple iOS provisioning portal: Getting started with Push Notifications - iOS Push notifications to users by using Mobile Services - iOS Once you've configured your application in the Apple iOS provisioning portal and exported the APNS push certificate from it, it's just a matter of uploading that certificate to Mobile Services using the Windows Azure admin portal: Clicking the "upload" button within the "Push" tab of your Mobile Service allows you to browse your local file-system and locate/upload your exported certificate.  As part of this you can also select whether you want to use the sandbox (dev) or production (prod) Apple service: Now, the code to send a push notification to your clients from within a Windows Azure Mobile Service is as easy as the code below: push.apns.send(deviceToken, {      alert: 'Toast: A new Mobile Services task.',      sound: 'default' }); This will cause Windows Azure Mobile Services to connect to APNS (Apple Push Notification Service) and send a notification to the iOS device you specified via the deviceToken: Check out our reference documentation for full details on how to use the new Windows Azure Mobile Services apns object to send your push notifications. Feedback Scripts An important part of working with any PNS (Push Notification Service) is handling feedback for expired device tokens and channels. This typically happens when your application is uninstalled from a particular device and can no longer receive your notifications. With Windows Notification Services you get an instant response from the HTTP server.  Apple's Notification Services works in a slightly different way and provides an additional endpoint you can connect to in order to poll for a list of expired tokens. As with all of the capabilities we integrate with Mobile Services, our goal is to allow developers to focus more on building their app and less on building infrastructure to support their ideas. Therefore we knew we had to provide a simple way for developers to integrate feedback from APNS on a regular basis.  This week’s update now includes a new screen in the portal that allows you to optionally provide a script to process your APNS feedback – and it will be executed by Mobile Services on an ongoing basis: This script is invoked periodically while your service is active. To poll the feedback endpoint you can simply call the apns object's getFeedback method from within this script: push.apns.getFeedback({       success: function(results) {           // results is an array of objects with deviceToken and time properties      } }); This returns a list of invalid tokens that can now be removed from your database. 
iOS Client SDK improvements Over the last month we've continued to work with a number of iOS advisors to make improvements to our Objective-C SDK. The SDK is being developed under an open source license (Apache 2.0) and is available on github. Many of the improvements are behind the scenes to improve performance and memory usage. However, one of the biggest improvements to our iOS Client API is the addition of an even easier login method.  Below is the Objective-C code you can now write to invoke it: [client loginWithProvider:@"twitter"                     onController:self                        animated:YES                      completion:^(MSUser *user, NSError *error) {      // if no error, you are now logged in via twitter }]; This code will automatically present and dismiss our login view controller as a modal dialog on the specified controller.  This does all the hard work for you and makes login via Twitter, Google, Facebook and Microsoft Account identities just a single line of code. My colleague Josh just posted a short video demonstrating these new features which I'd recommend checking out: Summary The above features are all now live in production and are available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using Mobile Services today. Visit the Windows Azure Mobile Developer Center to learn more about how to build apps with Mobile Services. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • How to upload web.config file using WebDAV on IIS7?

    - by Martin Liversage
    I want to copy an ASP.NET MVC website to a remote IIS 7 server using WebDAV. I have created a site in IIS, enabled WebDAV and assigned a special application pool I have named "WebDAV Application Pool". Using a Windows 7 or Vista client I'm able to mount the remote site as a network drive. So far, so good. However, I have problems uploading web.config files to the remote site. One problem is that as soon as a web.config has been uploaded it is used to configure the WebDAV site. The web.config file in a Views folder of an MVC project effectively blocks access to that folder. To work around this problem I have configured the application pool in the applicationHost.config file: <configuration> <applicationPools> <add name="WebDAV Application Pool" autoStart="true" enableConfigurationOverride="false" /> </applicationPools> </configuration> The interesting part is the 'enableConfigurationOverride' attribute: When true, indicates that delegated settings in Web.config files will be processed for applications within this application pool. When false, all settings in Web.config files will be ignored for this application pool. Doing this makes it possible to upload a web.config file to the Views folder without breaking access to the folder. However, I'm still unable to upload a web.config file to the root folder. I have the following settings in the applicationHost.config file to ensure that request filtering doesn't interfere with WebDAV: <configuration> <location path="webdav.mysite.tld"> <system.webServer> <security> <requestFiltering> <fileExtensions applyToWebDAV="false" /> <verbs applyToWebDAV="false" /> <hiddenSegments applyToWebDAV="false" /> </requestFiltering> </security> </system.webServer> </location> </configuration> In particular, hiddenSegments will normally block access to web.config, but setting the applyToWebDAV attribute to false should ensure that this file isn't blocked when using WebDAV. Unfortunately, I'm still unable to copy my web.config file to the root folder of the site. Doing drag and drop in Windows Explorer to the mapped WebDAV network drive will result in the following error message: Error 0x80070057: The parameter is incorrect. On the wire it seems that the HTTP status 400 Bad Request is returned. Is there anything I can do to configure WebDAV on IIS 7 to avoid this problem?

    Read the article

  • WCF ReliableSession and Timeouts

    - by user80108
    I have a WCF service used mainly for managing documents in a repository. I used the chunking channel sample from MS so that I could upload/download huge files. Now I have implemented reliable session with the service and I am seeing some strange behaviors. Here are the timeout values I am using. this.SendTimeout = new TimeSpan(0,10,0); this.OpenTimeout = new TimeSpan(0, 1, 0); this.CloseTimeout = new TimeSpan(0, 1, 0); this.ReceiveTimeout = new TimeSpan(0,10, 0); reliableBe.InactivityTimeout = new TimeSpan(0,2,0); I have the following issues. 1. If the Service is not up & running, the clients do not get disconnected after the OpenTimeout. I tried it with my test client. Scenario 1: Without Reliable Session: I get the following exception: Could not connect to net.tcp://localhost:8788/MediaManagementService/ep1. The connection attempt lasted for a time span of 00:00:00.9848790. TCP error code 10061: No connection could be made because the target machine actively refused it 127.0.0.1:8788 This is the correct behavior as I have given the OpenTimeout as 1 sec. Scenario 2: With ReliableSession: I get the same exception: Could not connect to net.tcp://localhost:8788/MediaManagementService/ep1. The connection attempt lasted for a time span of 00:00:00.9692460. TCP error code 10061: No connection could be made because the target machine actively refused it 127.0.0.1:8788. But this message comes after around 10 minutes (I believe after the SendTimeout). So here I have just enabled the reliable session, and now it looks like OpenTimeout = SendTimeout for the client. Is this desired behavior? 2: Issue while uploading huge files with ReliableSession: The general rule is that you have to set a huge value for the maxReceivedMessageSize, SendTimeout and ReceiveTimeout. But in the case of the chunking channel, the max received message size doesn't matter as the data is sent in chunks. So I set a huge value for the Send and ReceiveTimeout: say, 10 hours. Now the upload is going fine, but it has a side effect: even if the Service is not up, it takes 10 hours to time out the client connection due to the behavior mentioned in (1). Please let me know your thoughts on this behavior.

    Read the article

  • GoogleAppEngine : possible to disable FileUpload?

    - by James.Elsey
    Hi, When I deploy my application to GoogleAppEngine I keep getting the following error Uncaught exception from servlet java.lang.NoClassDefFoundError: java.io.FileOutputStream is a restricted class. Please see the Google App Engine developer's guide for more details. at com.google.apphosting.runtime.security.shared.stub.java.io.FileOutputStream.<clinit>(FileOutputStream.java) at org.apache.log4j.FileAppender.setFile(FileAppender.java:289) at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:163) at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:256) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:132) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:96) at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:654) at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:612) at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:509) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:415) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:441) at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:468) at org.apache.log4j.LogManager.<clinit>(LogManager.java:122) at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:88) at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155) at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:131) at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685) at org.springframework.web.context.ContextLoader.<clinit>(ContextLoader.java:146) at org.springframework.web.context.ContextLoaderListener.createContextLoader(ContextLoaderListener.java:53) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:44) at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:548) at org.mortbay.jetty.servlet.Context.startContext(Context.java:136) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:517) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:467) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.createHandler(AppVersionHandlerMap.java:191) at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.getHandler(AppVersionHandlerMap.java:168) at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:123) at com.google.apphosting.runtime.JavaRuntime.handleRequest(JavaRuntime.java:243) at com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5485) at com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5483) at com.google.net.rpc.impl.BlockingApplicationHandler.handleRequest(BlockingApplicationHandler.java:24) at com.google.net.rpc.impl.RpcUtil.runRpcInApplication(RpcUtil.java:398) at com.google.net.rpc.impl.Server$2.run(Server.java:852) at com.google.tracing.LocalTraceSpanRunnable.run(LocalTraceSpanRunnable.java:56) at com.google.tracing.LocalTraceSpanBuilder.internalContinueSpan(LocalTraceSpanBuilder.java:536) at 
com.google.net.rpc.impl.Server.startRpc(Server.java:807) at com.google.net.rpc.impl.Server.processRequest(Server.java:369) at com.google.net.rpc.impl.ServerConnection.messageReceived(ServerConnection.java:442) at com.google.net.rpc.impl.RpcConnection.parseMessages(RpcConnection.java:319) at com.google.net.rpc.impl.RpcConnection.dataReceived(RpcConnection.java:290) at com.google.net.async.Connection.handleReadEvent(Connection.java:474) at com.google.net.async.EventDispatcher.processNetworkEvents(EventDispatcher.java:831) at com.google.net.async.EventDispatcher.internalLoop(EventDispatcher.java:207) at com.google.net.async.EventDispatcher.loop(EventDispatcher.java:103) at com.google.net.rpc.RpcService.runUntilServerShutdown(RpcService.java:251) at com.google.apphosting.runtime.JavaRuntime$RpcRunnable.run(JavaRuntime.java:404) at java.lang.Thread.run(Unknown Source) I've checked the documentation and it suggests creating a FileUpload class; since I won't be uploading files/documents etc. from my application, is this necessary? Is there a way to disable this functionality, or at least bypass this error? I have already provided an implementation for a MultipartWrapperFactory class, as that has been suggested when searching for this error. Thanks
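
    For context, the stack trace shows log4j's FileAppender (configured through the application's log4j properties) trying to open a java.io.FileOutputStream, which the App Engine sandbox restricts. One possible workaround, sketched here as an assumption rather than a verified fix, is to configure log4j to log to the console only, since standard output is generally captured into App Engine's own logs and no FileOutputStream is needed. A minimal console-only log4j.properties might look like this (log4j 1.2 property names; the INFO level is an arbitrary choice):

        log4j.rootLogger=INFO, console
        log4j.appender.console=org.apache.log4j.ConsoleAppender
        log4j.appender.console.layout=org.apache.log4j.PatternLayout
        log4j.appender.console.layout.ConversionPattern=%d %-5p [%c] %m%n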

    Read the article

  • Azure storage: Uploaded files with size zero bytes

    - by Fabio Milheiro
    When I upload an image file to a blob, the image is uploaded apparently successfully (no errors). When I go to Cloud Storage Studio, the file is there, but with a size of 0 (zero) bytes. The following is the code that I am using: // These two methods belong to the ContentService class used to upload // files in the storage. public void SetContent(HttpPostedFileBase file, string filename, bool overwrite) { CloudBlobContainer blobContainer = GetContainer(); var blob = blobContainer.GetBlobReference(filename); if (file != null) { blob.Properties.ContentType = file.ContentType; blob.UploadFromStream(file.InputStream); } else { blob.Properties.ContentType = "application/octet-stream"; blob.UploadByteArray(new byte[1]); } } public string UploadFile(HttpPostedFileBase file, string uploadPath) { if (file.ContentLength == 0) { return null; } string filename; int indexBar = file.FileName.LastIndexOf('\\'); if (indexBar > -1) { filename = DateTime.UtcNow.Ticks + file.FileName.Substring(indexBar + 1); } else { filename = DateTime.UtcNow.Ticks + file.FileName; } ContentService.Instance.SetContent(file, Helper.CombinePath(uploadPath, filename), true); return filename; } // The above code is called by this code. HttpPostedFileBase newFile = Request.Files["newFile"] as HttpPostedFileBase; ContentService service = new ContentService(); blog.Image = service.UploadFile(newFile, string.Format("{0}{1}", Constants.Paths.BlogImages, blog.RowKey)); Before the image file is uploaded to the storage, the InputStream property of the HttpPostedFileBase appears to be fine (the size of the image corresponds to what is expected, and no exceptions are thrown). And the really strange thing is that this works perfectly in other cases (uploading PowerPoints or even other images from the Worker role). The code that calls the SetContent method seems to be exactly the same, and the file seems to be correct since a new file with zero bytes is created at the correct location. Does anyone have any suggestions, please? I debugged this code dozens of times and I cannot see the problem. Any suggestions are welcome! Thanks

    Read the article

  • Does writing data to server using Java URL class require response from server?

    - by gigadot
    I am trying to upload files using the Java URL class and I have found a previous question on Stack Overflow which explains the details very well, so I tried to follow it. Below is my code, adapted from the snippet given in the answer. My problem is that if I don't make a call to one of connection.getResponseCode() or connection.getInputStream() or connection.getResponseMessage() or anything which is related to the response from the server, the request will never be sent to the server. Why do I need to do this? Or is there any way to write the data without getting the response? P.S. I have developed a server-side uploading servlet which accepts multipart/form-data and saves it to files using FileUpload. It is stable and definitely working without any problem, so this is not where my problem comes from. import java.io.Closeable; import java.io.File; import java.io.FileInputStream; import java.io.IOException; import java.io.OutputStream; import java.io.PrintWriter; import java.net.HttpURLConnection; import java.net.URL; import org.apache.commons.io.IOUtils; public class URLUploader { public static void closeQuietly(Closeable... objs) { for (Closeable closeable : objs) { IOUtils.closeQuietly(closeable); } } public static void main(String[] args) throws IOException { File textFile = new File("D:\\file.zip"); String boundary = Long.toHexString(System.currentTimeMillis()); // Just generate some unique random value. HttpURLConnection connection = (HttpURLConnection) new URL("http://localhost:8080/upslet/upload").openConnection(); connection.setDoOutput(true); connection.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary); OutputStream output = connection.getOutputStream(); PrintWriter writer = new PrintWriter(output, true); // Send text file. writer.println("--" + boundary); writer.println("Content-Disposition: form-data; name=\"file1\"; filename=\"" + textFile.getName() + "\""); writer.println("Content-Type: application/octet-stream"); FileInputStream fin = new FileInputStream(textFile); writer.println(); IOUtils.copy(fin, output); writer.println(); // End of multipart/form-data. writer.println("--" + boundary + "--"); output.flush(); closeQuietly(fin, writer, output); // Above request will never be sent if .getInputStream() or .getResponseCode() or .getResponseMessage() does not get called. connection.getResponseCode(); } }
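
    For what it's worth, this matches how HttpURLConnection generally behaves: the request body is buffered, and the implementation is only forced to complete the exchange once some part of the response is requested, so the usual pattern is to always read at least the status code after writing. Below is a minimal sketch of that pattern; the URL is reused from the snippet above, and the tiny text/plain payload is just a placeholder.

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public class PostAndConfirm {
            public static void main(String[] args) throws Exception {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL("http://localhost:8080/upslet/upload").openConnection();
                conn.setDoOutput(true); // implies POST once the output stream is written
                conn.setRequestProperty("Content-Type", "text/plain");

                try (OutputStream out = conn.getOutputStream()) {
                    out.write("hello".getBytes(StandardCharsets.UTF_8));
                }

                // Asking for the response is what forces the request to complete;
                // without this the connection can be abandoned before the servlet runs.
                int status = conn.getResponseCode();
                System.out.println("Server replied: " + status);

                conn.disconnect();
            }
        }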

    Read the article

  • .exe File becomes corrupted when downloaded from server

    - by Kerri
    Firstly: I'm a lowly web designer who knows just enough PHP to be dangerous and just enough about server administration to be, well, nothing. I probably won't understand you unless you're very clear! The setup: I've set up a website where the client uploads files to a specific directory, and those files are made available, through PHP, for download by users. The files are generally executable files over 50MB. The client does not want them zipped, as they feel their users aren't savvy enough to unzip them. I'm using the PHP below to force a download dialogue box and hide the directory where the files are located. It's a Linux server, if that makes a difference. The problem: There is a certain file that becomes corrupt after the user tries to download it. It is an executable file, but when it's clicked on, a blank DOS window opens up. The original file, prior to download, opens perfectly. There are several other similar files that go through the same exact download procedure, and all of those work just fine. Things I've tried: I've tried uploading the file zipped, then unzipping it on the server to make sure it wasn't becoming corrupt during upload, and no luck. I've also compared the binary code of the original file to the downloaded file that doesn't work, and they're exactly the same (so the PHP isn't accidentally inserting anything extra into the file). Could it be an issue with the headers in my downloadFile function? I really am not sure how to troubleshoot this one… This is the download PHP, if it's relevant ($filenamereplace is defined elsewhere): downloadFile("../DOWNLOADS/files/$filenamereplace","$filenamereplace"); function downloadFile($file,$filename){ if(file_exists($file)) { header('Content-Description: File Transfer'); header('Content-Type: application/octet-stream'); header('Content-Disposition: attachment; filename="'.$filename.'"'); header('Content-Transfer-Encoding: binary'); header('Expires: 0'); header('Cache-Control: must-revalidate, post-check=0, pre-check=0'); header('Pragma: public'); header('Content-Length: ' . filesize($file)); @ flush(); readfile($file); exit; } }

    Read the article

  • Sharing a file from Android to Gmail or to Dropbox

    - by Calaf
    To share a simple text file, I started by copying verbatim from FileProvider's manual page: <application android:allowBackup="true" android:icon="@drawable/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme" > <provider android:name="android.support.v4.content.FileProvider" android:authorities="com.mycorp.helloworldtxtfileprovider.MainActivity" android:exported="false" android:grantUriPermissions="true" > <meta-data android:name="android.support.FILE_PROVIDER_PATHS" android:resource="@xml/my_paths" /> </provider> <activity android:name="com.mycorp.helloworldtxtfileprovider.MainActivity" ... Then I saved a text file and used, again nearly verbatim, the code under Sending binary content. (Notice that this applies more accurately in this case than "Sending text content" since we are sending a file, which happens to be a text file, rather than just a string of text.) For the convenience of duplication on your side, and since the code is in any case so brief, I'm including it here in full. public class MainActivity extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); String filename = "hellow.txt"; String fileContents = "Hello, World!\n"; byte[] bytes = fileContents.getBytes(); FileOutputStream fos = null; try { fos = this.openFileOutput(filename, MODE_PRIVATE); fos.write(bytes); } catch (IOException e) { e.printStackTrace(); } finally { try { fos.close(); } catch (IOException e) { e.printStackTrace(); } } File file = new File(filename); Intent shareIntent = new Intent(); shareIntent.setAction(Intent.ACTION_SEND); shareIntent.putExtra(Intent.EXTRA_STREAM, Uri.fromFile(file)); shareIntent.setType("application/txt"); startActivity(Intent.createChooser(shareIntent, getResources().getText(R.string.send_to))); file.delete(); } } Aside from adding a value for send_to in res/values/strings.xml, the only other change I made to the generic Hello, World that Eclipse creates is to add the following in res/xml/my_paths.xml (as described on the page previously referenced). <paths xmlns:android="http://schemas.android.com/apk/res/android"> <Files-path name="files" path="." /> </paths> This code runs fine. It shows a list of intent recipients. But sending the text file to either Dropbox or to Gmail fails. Dropbox sends the notification "Uploading to Dropbox" followed by "Upload failed: my_file.txt". After "sending message.." Gmail sends "Couldn't send attachment". What is wrong?
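
    For comparison, below is a hedged sketch of the pattern the FileProvider documentation describes: resolve the file against getFilesDir() (which is where openFileOutput() writes), ask FileProvider for a content:// URI instead of using Uri.fromFile(), and grant the receiving app temporary read permission. Note also that the support library documents the lowercase element name files-path for the paths XML. This is an illustration of the documented approach, not a verified fix for the failure above, and it assumes the same authority string declared in the manifest:

        import java.io.File;

        import android.app.Activity;
        import android.content.Intent;
        import android.net.Uri;
        import android.support.v4.content.FileProvider;

        public class ShareSketch {
            // Hypothetical helper; call from an Activity after the file has been
            // written with openFileOutput("hellow.txt", MODE_PRIVATE).
            static void shareHello(Activity activity) {
                // openFileOutput() writes into getFilesDir(), so resolve the File there
                File file = new File(activity.getFilesDir(), "hellow.txt");

                // The authority must match android:authorities in the manifest
                Uri contentUri = FileProvider.getUriForFile(
                        activity, "com.mycorp.helloworldtxtfileprovider.MainActivity", file);

                Intent shareIntent = new Intent(Intent.ACTION_SEND);
                shareIntent.setType("text/plain");
                shareIntent.putExtra(Intent.EXTRA_STREAM, contentUri);
                // Let the receiving app (Gmail, Dropbox, ...) read the content URI
                shareIntent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);

                activity.startActivity(Intent.createChooser(shareIntent, "Send to"));
            }
        }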

    Read the article

  • Which credentials should I put in for Google App Engine BulkLoader at development server?

    - by Hoang Pham
    Hello everyone, I would like to ask which kind of credentials I need to put in for importing data using the Google App Engine BulkLoader class: appcfg.py upload_data --config_file=models.py --filename=listcountries.csv --kind=CMSCountry --url=http://localhost:8178/remote_api vit/ And then it asks me for credentials: Please enter login credentials for localhost Here is an extract of the content of models.py; I use this listcountries.csv file: class CMSCountry(db.Model): sortorder = db.StringProperty() name = db.StringProperty(required=True) formalname = db.StringProperty() type = db.StringProperty() subtype = db.StringProperty() sovereignt = db.StringProperty() capital = db.StringProperty() currencycode = db.StringProperty() currencyname = db.StringProperty() telephonecode = db.StringProperty() lettercode = db.StringProperty() lettercode2 = db.StringProperty() number = db.StringProperty() countrycode = db.StringProperty() class CMSCountryLoader(bulkloader.Loader): def __init__(self): bulkloader.Loader.__init__(self, 'CMSCountry', [('sortorder', str), ('name', str), ('formalname', str), ('type', str), ('subtype', str), ('sovereignt', str), ('capital', str), ('currencycode', str), ('currencyname', str), ('telephonecode', str), ('lettercode', str), ('lettercode2', str), ('number', str), ('countrycode', str) ]) loaders = [CMSCountryLoader] Every attempt to enter the email and password results in "Authentication Failed", so I cannot import the data into the development server. I don't think that I have any problem with my files nor my models because I have successfully uploaded the data to the appspot.com application. So what should I put in for localhost credentials? I also tried to use Eclipse with Pydev but I still got the same message :( Here is the output: Uploading data records. [INFO ] Logging to bulkloader-log-20090820.121659 [INFO ] Opening database: bulkloader-progress-20090820.121659.sql3 [INFO ] [Thread-1] WorkerThread: started [INFO ] [Thread-2] WorkerThread: started [INFO ] [Thread-3] WorkerThread: started [INFO ] [Thread-4] WorkerThread: started [INFO ] [Thread-5] WorkerThread: started [INFO ] [Thread-6] WorkerThread: started [INFO ] [Thread-7] WorkerThread: started [INFO ] [Thread-8] WorkerThread: started [INFO ] [Thread-9] WorkerThread: started [INFO ] [Thread-10] WorkerThread: started Password for [email protected]: [DEBUG ] Configuring remote_api. 
url_path = /remote_api, servername = localhost:8178 [DEBUG ] Bulkloader using app_id: abc [INFO ] Connecting to /remote_api [ERROR ] Exception during authentication Traceback (most recent call last): File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\bulkloader.py", line 2802, in Run request_manager.Authenticate() File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\bulkloader.py", line 1126, in Authenticate remote_api_stub.MaybeInvokeAuthentication() File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 488, in MaybeInvokeAuthentication datastore_stub._server.Send(datastore_stub._path, payload=None) File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\appengine_rpc.py", line 344, in Send f = self.opener.open(req) File "C:\Python25\lib\urllib2.py", line 381, in open response = self._open(req, data) File "C:\Python25\lib\urllib2.py", line 399, in _open '_open', req) File "C:\Python25\lib\urllib2.py", line 360, in _call_chain result = func(*args) File "C:\Python25\lib\urllib2.py", line 1107, in http_open return self.do_open(httplib.HTTPConnection, req) File "C:\Python25\lib\urllib2.py", line 1082, in do_open raise URLError(err) URLError: <urlopen error (10061, 'Connection refused')> [INFO ] Authentication Failed Thank you!

    Read the article

  • Is Zend Framework a total waste of my time?

    - by Citizen
    Ok, I'm about 50% done with the "30 minute" quickstart guide from Zend. I must be missing something, because this seems like a total waste of time. The point of this quick guide is to create a guestbook, something I could do in 5 minutes with regular naked non-framework PHP. Here's my path to the Zend Framework: c:/program files/wamp/www/_zend/ Here's my path to my quickstart project: c:/program files/wamp/www/_zend/bin/quickstart/ I have a number of questions at this point: http://framework.zend.com/docs/quickstart/create-a-model-and-database-table 1: I'm running the command line to run my database loading script. I get an error stating that it can't find Zend/AutoLoader.php because my path to the Zend library is wrong. I followed all of the steps. I defined the path to my Zend library in the main config file, but for some reason, it's defined again in my DB loader. All of these scripts that they have me load point the relative path to the Zend library as /../library. Problem is, there's nothing in that folder. To get to my actual Zend folder, you'd need to be (relatively) at /../../../../library Which brings me to my 2nd question: 2: Where the #$#$ are the main Zend files supposed to be? The install directions were basically "put it wherever you want", when the real answer (after a bunch of errors and wasted time) was "put it somewhere so that it's really easy to type the full path a thousand times in command line" and "it also better be in a runnable place on your webserver since it's going to create your quickstart application in a subdirectory within zend". Which brings us to the third question: 3: Am I supposed to have this library in both the parent core Zend (wamp/_zend/library) AND my application (quickstart/library)? 4: If that is the case, it seems like a ton of wasted files to be uploading. I'd like to use Zend to create products that my customers will download. 5 megs of overhead seems like a bit much. Zend claims you can use these library components separately, but it looks to me like I'm going to have to upload them every time. Which leads to the next question: 5: It appears that perhaps Zend is more for a single application that is not supposed to be distributed. Is this not the case? 6: According to their default file structure, everything but my /public folder would be above public_html on my server if I wanted this to rest on my TLD. I would need to rename every reference of /public/ to /public_html/, or am I missing something else?

    Read the article

  • How can I easily maintain a cross-file JavaScript Library Development Environment

    - by John
    I have been developing a new JavaScript application which is rapidly growing in size. My entire JavaScript application has been encapsulated inside a single function, in a single file, in a way like this: (function(){ var uniqueApplication = window.uniqueApplication = function(opts){ if (opts.featureOne) { this.featureOne = new featureOne(opts.featureOne); } if (opts.featureTwo) { this.featureTwo = new featureTwo(opts.featureTwo); } if (opts.featureThree) { this.featureThree = new featureThree(opts.featureThree); } }; var featureOne = function(options) { this.options = options; }; featureOne.prototype.myFeatureBehavior = function() { //Lots of Behaviors }; var featureTwo = function(options) { this.options = options; }; featureTwo.prototype.myFeatureBehavior = function() { //Lots of Behaviors }; var featureThree = function(options) { this.options = options; }; featureThree.prototype.myFeatureBehavior = function() { //Lots of Behaviors }; })(); In the same file after the anonymous function and execution I do something like this: (function(){ var instanceOfApplication = new uniqueApplication({ featureOne:"dataSource", featureTwo:"drawingCanvas", featureThree:3540 }); })(); Before uploading this software online I pass my JavaScript file, and all its dependencies, into Google Closure Compiler, using just the default compression, and then I have one nice JavaScript file ready to go online for production. This technique has worked marvelously for me - as it has created only one global footprint in the DOM and has given me a very flexible framework to grow each additional feature of the application. However - I am reaching the point where I'd really rather not keep this entire application inside one JavaScript file. I'd like to move from having one large uniqueApplication.js file during development to having a separate file for each feature in the application, featureOne.js - featureTwo.js - featureThree.js Once I have completed offline development testing, I would then like to use something, perhaps Google Closure Compiler, to combine all of these files together - however I want these files to all be compiled inside of that scope, as they are when I have them inside one file - and I would like for them to remain in the same scope during offline testing too. I see that Google Closure Compiler supports an argument for passing in modules but I haven't really been able to find a whole lot of information on doing something like this. Anybody have any idea how this could be accomplished - or any suggestions on a development practice for writing a single JavaScript library across multiple files that still only leaves one footprint on the DOM?

    Read the article

  • Displaying a message in a dialog box using AJAX, jQuery, and CakePHP

    - by LainIwakura
    I have a form, and when users submit this form, it should pass the data along to a function using AJAX. Then, the result of that is displayed to the user in a dialog box. I'm using CakePHP (1.3) and jQuery to try and accomplish this but I feel like I'm running into the ground. The form will eventually be used for uploading images with tags, but for now I just want to see a message pop up in the box. The form: <?php echo $this->Form->create('Image', array('type' => 'file', 'controller' => 'images', 'action' => 'upload', 'method' => 'post')); echo $this->Form->input('Wallpaper', array('type' => 'file')); echo $this->Form->input('Tags'); echo $this->Form->end('Upload!'); ?> The AJAX: $(document).ready(function() { $("#ImageUploadForm").submit(function() { $.ajax({ type: "POST", url: "/images/upload/", data: $(this).serialize(), async: false, success: function(html){ $("#dialog-modal").dialog({ $("#dialog-modal").append("<p>"+html+"</p>"); height: 140, modal: true, buttons: { Ok: function() { $(this).dialog('close'); } } }) } }); return false; }); }); NOTE: if I put $("#dialog-modal").dialog({ height: 140, modal: true }); OUTSIDE of the $.ajax but inside the $("#ImageUploadForm").submit(function() { and comment out the $.ajax stuff, I WILL see a dialog box pop up and then I have to click it for it to go away. After this, it will not forward to the location /images/upload/ The method that AJAX calls: public function upload() { $this->autoRender = false; if ($this->RequestHandler->isAjax()) { echo 'Hi!'; exit(); } } $this->RequestHandler->isAjax() seems to do either absolutely nothing, or it is always returning false. I have never entered an if statement with that as the condition. Thanks for all the help; if you need more information, let me know.

    Read the article

  • Maven mercurial extension constantly fails

    - by TheLQ
    After 2+ hours I was able to get the maven-scm-provider-hg extension (for pushing to mercurial repos from Maven) semi-working, meaning that it was executing commands instead of just giving errors. However, I think I've run into a wall with this error: [INFO] [deploy:deploy {execution: default-deploy}] [INFO] Retrieving previous build number from pircbotx.googlecode.com [INFO] Removing C:\DOCUME~1\Owner\LOCALS~1\Temp\wagon-scm1210107000.checkout\pircbotx\pircbotx\1.3-SNAPSHOT [INFO] EXECUTING: cmd.exe /X /C "hg clone -r tip https://*SNIP*@site.pircbotx.googlecode.com/hg/maven2/snapshots/pircbotx/pircbotx/1.3-SNAPSHOT C:\DOCUME~1\Owner\LOCALS~1\Temp\wagon-scm1210107000.checkout\pircbotx\pircbotx\1.3-SNAPSHOT" [INFO] EXECUTING: cmd.exe /X /C "hg locate" [INFO] repository metadata for: 'snapshot pircbotx:pircbotx:1.3-SNAPSHOT' could not be found on repository: pircbotx.googlecode.com, so will be created Uploading: scm:hg:https://site.pircbotx.googlecode.com/hg/maven2/snapshots/pircbotx/pircbotx/1.3-SNAPSHOT/pircbotx-1.3-SNAPSHOT.jar [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Error deploying artifact: Error listing repository: No such command 'list'. What on earth would cause that error? I'm on a Windows box, so any commands that don't exist give "'list' is not recognized as an internal or external command...", not "No such command 'list'." POM: <build> <extensions> <extension> <groupId>org.apache.maven.scm</groupId> <artifactId>maven-scm-provider-hg</artifactId> <version>1.4</version> </extension> <extension> <groupId>org.apache.maven.wagon</groupId> <artifactId>wagon-scm</artifactId> <version>1.0-beta-7</version> </extension> </extensions> ... <distributionManagement> <snapshotRepository> <id>pircbotx.googlecode.com</id> <name>PircBotX Site</name> <url>scm:hg:https://site.pircbotx.googlecode.com/hg/maven2/snapshots</url> <uniqueVersion>false</uniqueVersion> </snapshotRepository> </distributionManagement> Mercurial version: W:\programming\pircbot-hg>hg version Mercurial Distributed SCM (version 1.7.2) Any suggestions?

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53 54 55  | Next Page >