Http-Server, Monitoring a path/folder

I'm not sure where to start; I cobbled the code together from templates. With the code below I can download every file from the HTTP server. It checks whether a file has already been downloaded and, if so, doesn't fetch it from the site again. However, I only want to download some of the files, so I'm trying to come up with a simple solution that does one of the following:

  1. Get the last-modified or creation time of a file on the HTTP server. I know how to do this for a local folder, but I don't want to download a file just to inspect it; I need to do it on the server side. On the local PC it would be FileInfo infoSource = new FileInfo(sourceDir); followed by infoSource.CreationTime, where sourceDir is the file path. Is there something similar over HTTP? (See the HEAD-request sketch after this list.)
  2. Fetch only the latest 10 files from the server site. Not just the newest one, but the newest 10.
  3. Monitor the server site so that as soon as a file MyFileName_Version is placed on it, the latest file following that naming convention is fetched.
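
For item 1, an HTTP HEAD request is usually enough: the server answers with its headers, including Last-Modified, without sending the file body. Here is a minimal sketch, assuming the server populates that header; the URL and file name are placeholders:

using System;
using System.Net;

class LastModifiedProbe
{
    // HEAD asks the server for headers only; no file content is transferred.
    static DateTime GetRemoteLastModified(string fileUrl)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(fileUrl);
        request.Method = "HEAD";
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            // HttpWebResponse parses the Last-Modified header into a DateTime.
            // Servers that omit the header make this default to the current
            // time, so a sanity check may be needed.
            return response.LastModified;
        }
    }

    static void Main()
    {
        // Hypothetical file URL; substitute a real entry from the listing.
        Console.WriteLine(GetRemoteLastModified("http://localhost:10000/MyFileName_1.0.zip"));
    }
}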

Any of these approaches would work for me, but I'm still new to this area and struggling. At the moment I have the following code:

using System;
using System.IO;
using System.Net;
using System.Text.RegularExpressions;

namespace AutomaticUpgrades
{
    class Program
    {
        static void Main(string[] args)
        {
            // Root URL of the site's directory listing (e.g. the binary-release/ folder).
            string url = "http://localhost:10000";
           
            DownloadDataFromArtifactory(url);

        }

        private static void DownloadDataFromArtifactory(string url)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    string html = reader.ReadToEnd();
                    Regex regex = new Regex(GetDirectoryListingRegexForUrl(url));
                    MatchCollection matches = regex.Matches(html);
                    if (matches.Count > 0)
                    {
                        // Verbatim (@) string: plain "C:\Users\..." literals do not
                        // compile because the backslashes need escaping.
                        string downloadFolder = @"C:\Users\RLEBEDEVS\Desktop\sourceFolder\Http-server Download";
                        using (WebClient webClient = new WebClient())
                        {
                            foreach (Match match in matches)
                            {
                                if (!match.Success)
                                    continue;

                                string name = match.Groups["name"].Value;
                                Console.WriteLine(name);

                                // Skip short listing entries (e.g. "../") and anything
                                // already present in the download folder.
                                if (name.Length > 5 && DupeFile(name, downloadFolder))
                                {
                                    webClient.DownloadFile(url + "/" + name, Path.Combine(downloadFolder, name));
                                }
                            }
                        }
                    }
                }
            }
        }

        public static string GetDirectoryListingRegexForUrl(string url)
        {
            if (url.Equals("http://localhost:10000"))
            {
                // Lazy quantifiers so that a line containing several anchors
                // yields one match per anchor instead of one greedy match.
                return "<a href=\".*?\">(?<name>.*?)</a>";
            }
            throw new NotSupportedException();
        }

        // Returns true when the file is NOT yet present in folderLocation,
        // i.e. it still needs to be downloaded.
        private static bool DupeFile(string httpFile, string folderLocation)
        {
            string[] files = Directory.GetFiles(folderLocation);
            foreach (string s in files)
            {
                if (Path.GetFileName(s) == httpFile)
                {
                    return false;
                }
            }

            return true;
        }

        
    }
}
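
Question item 2 (only the newest 10 files) can be layered on top of the same directory listing: probe each parsed file name with a HEAD request and sort by the Last-Modified header. A sketch under the same placeholder-URL assumptions (the class and method names are mine):

using System;
using System.Linq;
using System.Net;

class LatestTenProbe
{
    // HEAD request: headers only, no file body is transferred.
    static DateTime GetRemoteLastModified(string fileUrl)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(fileUrl);
        request.Method = "HEAD";
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            return response.LastModified;
        }
    }

    // Given the file names parsed out of the listing, keep the ten most
    // recently modified ones, newest first.
    static string[] LatestTen(string baseUrl, string[] names)
    {
        return names
            .Select(n => new { Name = n, Modified = GetRemoteLastModified(baseUrl + "/" + n) })
            .OrderByDescending(x => x.Modified)
            .Take(10)
            .Select(x => x.Name)
            .ToArray();
    }
}

This issues one HEAD request per file, which is fine for small listings but gets chatty on large ones.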

After a few days in 'HTTP-server mode' I got a working solution to the problem, so I'm posting it here. Along the way I also came to understand how the API works and what my questions really amounted to, though it wasn't entirely clear while learning as I went.

public async Task GetPackages(string Feed, string Path, string filter, string retrievePath)
{
    // Constrain the year and month the file was put on the HTTP site.
    // The API storage path is built from {Feed} and {Path}; retrievePath is
    // where the latest zip file is downloaded to.
    int yearCheck = DateTime.Now.Year;
    int monthCheck = DateTime.Now.Month - 2; // NB: goes non-positive in Jan/Feb, so the window does not span year boundaries
    string uri = $"{_httpClient.BaseAddress}/api/storage/{Feed}/{Path}";
    var artifacts = new List<Artifact>();
    var filteredList = new List<Artifact>();

    // Hit the REST API to list every file under the {Feed} directory.
    // (This call errored at first, even though the script then ran correctly.)
    string responseText = await _httpClient.GetStringAsync(uri);
    var response = JsonConvert.DeserializeObject<GetPackagesResponse>(responseText);
    if (response.Children.Count < 1)
        return;
    // Loop through Children to find every zip file from the last three months.
    foreach (var item in response.Children)
    {
        if (item.Folder)
            continue;

        var package = item.Uri.TrimStart('/');
        var fullPath = $"{uri}{item.Uri}";

        // downloadUri is the base URI used for downloading this particular
        // .zip file. We additionally filter on last-modified time: for each
        // element, fetch the lastModified field and test it against the criteria.
        var downloadUri = $"{_httpClient.BaseAddress}/{Feed}/{Path}/";
        var lastModified = await GetLastModified(fullPath);

        // The last-modified date must fall in the current year and inside the
        // two-month window computed above.
        if (lastModified.Year == yearCheck && lastModified.Month > monthCheck)
        {
            artifacts.Add(new Artifact(package, downloadUri, lastModified));
        }
    }

    
    // Keep only the artifacts whose package name matches the filter.
    foreach (var artifact in artifacts)
    {
        if (artifact.Package.Contains(filter))
        {
            filteredList.Add(artifact);
        }
    }

    // Sort by LastModified, newest first. Sorting on the full DateTime rather
    // than DayOfYear also behaves correctly across year boundaries.
    List<Artifact> SortedList = filteredList.OrderByDescending(o => o.LastModified).ToList();

    // Nothing matched the filter or the date window.
    if (SortedList.Count == 0)
        return;

    // Download the matching file. Only one file is retrieved: the first element
    // of the sorted list, which should be the latest one available.
    ArtifactRetrieve.DownloadDataArtifactory(SortedList[0].DownloadUri, SortedList[0].Package, retrievePath);
}
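
The method above leans on several members that are not shown: _httpClient (an HttpClient), the GetPackagesResponse and Artifact types, and GetLastModified. Below is a rough sketch of what they might look like. The JSON property names ("children", "uri", "folder", "lastModified") follow Artifactory's storage API, but the class shapes themselves are my assumptions, not the original code:

using System;
using System.Collections.Generic;
using Newtonsoft.Json;

// Assumed mapping for the JSON returned by /api/storage/{feed}/{path}.
public class GetPackagesResponse
{
    [JsonProperty("children")]
    public List<ChildItem> Children { get; set; }
}

public class ChildItem
{
    [JsonProperty("uri")]
    public string Uri { get; set; }

    [JsonProperty("folder")]
    public bool Folder { get; set; }
}

// One downloadable package plus the metadata used for filtering and sorting.
public class Artifact
{
    public Artifact(string package, string downloadUri, DateTime lastModified)
    {
        Package = package;
        DownloadUri = downloadUri;
        LastModified = lastModified;
    }

    public string Package { get; }
    public string DownloadUri { get; }
    public DateTime LastModified { get; }
}

Under the same assumptions, GetLastModified can hit the storage API for the individual file, whose JSON carries a lastModified timestamp:

private class FileInfoResponse
{
    [JsonProperty("lastModified")]
    public DateTime LastModified { get; set; }
}

private async Task<DateTime> GetLastModified(string fileInfoUri)
{
    // /api/storage/{feed}/{path}/{file} returns JSON including "lastModified".
    string json = await _httpClient.GetStringAsync(fileInfoUri);
    return JsonConvert.DeserializeObject<FileInfoResponse>(json).LastModified;
}

With the sorted list in place, question item 2 (the newest 10 rather than the newest 1) reduces to filteredList.OrderByDescending(o => o.LastModified).Take(10).ToList().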

My conclusions:

  1. There is no way to monitor the site other than putting a loop in your script; in other words, if I'm right, System.IO.FileSystemWatcher cannot be mimicked against an HTTP server (possibly a wrong assumption). See the polling sketch below.
  2. I can check the dates and sort the retrieved data in a list. That gives better control over which data I'm downloading, the newest or the oldest.
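
A minimal shape for that polling loop, assuming a fixed interval is acceptable (the interval and the check itself are placeholders):

using System;
using System.Threading;

class HttpPollingWatcher
{
    static void Main()
    {
        // No FileSystemWatcher equivalent exists for plain HTTP, so re-check
        // the site on a timer instead.
        using (var timer = new Timer(_ => CheckSite(), null,
                                     TimeSpan.Zero, TimeSpan.FromMinutes(10)))
        {
            Console.WriteLine("Polling... press Enter to stop.");
            Console.ReadLine();
        }
    }

    static void CheckSite()
    {
        // Placeholder: run the listing/compare/download logic shown earlier,
        // e.g. fetch the index page and fall through to the download code.
        Console.WriteLine($"Checked site at {DateTime.Now:T}");
    }
}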